Test Report: KVM_Linux_containerd 17965

5e5f17cf679477cd200ce76c4e9747d73049443e:2024-01-16:32726

Failed tests (1/318)

Order | Failed test                  | Duration (s)
38    | TestAddons/parallel/Registry | 22.93
TestAddons/parallel/Registry (22.93s)
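
The transcript below shows the failing sequence: the registry pods come up and answer the connectivity check, but the final `addons disable registry` step exits with status 11. The key commands can be replayed by hand against the same profile (a sketch; the profile name `addons-133977` and node IP `192.168.39.10` are taken from this run's log):

    # probe the in-cluster registry the way the test does (addons_test.go:345)
    kubectl --context addons-133977 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

    # fetch the node IP and hit the registry port directly
    # (the test issues a plain GET to http://192.168.39.10:5000; curl is a stand-in here)
    out/minikube-linux-amd64 -p addons-133977 ip
    curl -sS http://192.168.39.10:5000/

    # the step that fails in this run (addons_test.go:388, exit status 11)
    out/minikube-linux-amd64 -p addons-133977 addons disable registry --alsologtostderr -v=1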

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 17.710845ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-kbjdn" [b122adb8-2e2d-4dae-8d96-0dd52afdbf2c] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.161921992s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-8pknn" [5a3c3adb-491a-48ad-8000-fa60b548f2ba] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.006309177s
addons_test.go:340: (dbg) Run:  kubectl --context addons-133977 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-133977 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-133977 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.784076459s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-133977 ip
2024/01/16 02:30:23 [DEBUG] GET http://192.168.39.10:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-133977 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-133977 addons disable registry --alsologtostderr -v=1: exit status 11 (421.744168ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0116 02:30:23.902344  339661 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:30:23.902564  339661 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:30:23.902575  339661 out.go:309] Setting ErrFile to fd 2...
	I0116 02:30:23.902580  339661 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:30:23.902786  339661 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-330687/.minikube/bin
	I0116 02:30:23.903083  339661 mustload.go:65] Loading cluster: addons-133977
	I0116 02:30:23.903486  339661 config.go:182] Loaded profile config "addons-133977": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0116 02:30:23.903509  339661 addons.go:597] checking whether the cluster is paused
	I0116 02:30:23.903596  339661 config.go:182] Loaded profile config "addons-133977": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0116 02:30:23.903610  339661 host.go:66] Checking if "addons-133977" exists ...
	I0116 02:30:23.904008  339661 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:30:23.904061  339661 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:30:23.918780  339661 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38847
	I0116 02:30:23.919290  339661 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:30:23.919966  339661 main.go:141] libmachine: Using API Version  1
	I0116 02:30:23.920009  339661 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:30:23.920408  339661 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:30:23.920579  339661 main.go:141] libmachine: (addons-133977) Calling .GetState
	I0116 02:30:23.922216  339661 main.go:141] libmachine: (addons-133977) Calling .DriverName
	I0116 02:30:23.922491  339661 ssh_runner.go:195] Run: systemctl --version
	I0116 02:30:23.922521  339661 main.go:141] libmachine: (addons-133977) Calling .GetSSHHostname
	I0116 02:30:23.924782  339661 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:30:23.925151  339661 main.go:141] libmachine: (addons-133977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:06:14", ip: ""} in network mk-addons-133977: {Iface:virbr1 ExpiryTime:2024-01-16 03:27:51 +0000 UTC Type:0 Mac:52:54:00:f8:06:14 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:addons-133977 Clientid:01:52:54:00:f8:06:14}
	I0116 02:30:23.925184  339661 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined IP address 192.168.39.10 and MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:30:23.925296  339661 main.go:141] libmachine: (addons-133977) Calling .GetSSHPort
	I0116 02:30:23.925525  339661 main.go:141] libmachine: (addons-133977) Calling .GetSSHKeyPath
	I0116 02:30:23.925736  339661 main.go:141] libmachine: (addons-133977) Calling .GetSSHUsername
	I0116 02:30:23.925916  339661 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-330687/.minikube/machines/addons-133977/id_rsa Username:docker}
	I0116 02:30:24.058733  339661 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0116 02:30:24.058830  339661 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 02:30:24.180233  339661 cri.go:89] found id: "4744173521ad7687144254eea07ff8aff4223eee041b7cec0b7bda79fc00a1d0"
	I0116 02:30:24.180265  339661 cri.go:89] found id: "955484134a6598f55316fa94ddf4966dfc2a6c1c321518815e1bf64507dd6c19"
	I0116 02:30:24.180270  339661 cri.go:89] found id: "1d681d580c35fb35ca02a07f559813cd2fc4bd2d20b7dac9a6948941ebc25881"
	I0116 02:30:24.180274  339661 cri.go:89] found id: "814f9daed3f31d28e9e96eb3609caa988f2ffd143e4ba8ad56cad2110625ae93"
	I0116 02:30:24.180277  339661 cri.go:89] found id: "16403efb281aa177636d6d4e872cca8da4fd93b5f493acae086bf439e3e0e40b"
	I0116 02:30:24.180285  339661 cri.go:89] found id: "558782553a812dba2aa6ca34af938d6b228a46d9331e89f1397c0ab70fb07a2e"
	I0116 02:30:24.180288  339661 cri.go:89] found id: "b82b85e386977dfdeef77dc75e21382c2a50d388eff2ee7fa3ee2f89a8780591"
	I0116 02:30:24.180292  339661 cri.go:89] found id: "16f548ecef31c5f8c4e45c04546f6291c3ecf18d483da30b0938e23f65f79cde"
	I0116 02:30:24.180295  339661 cri.go:89] found id: "8a684c9600d9a2b635d09036ca01be2870d531a3d8328ef491048d24d181d825"
	I0116 02:30:24.180304  339661 cri.go:89] found id: "08720f95801beab3117abe659aaeb8a24cbddaad9103502692f194ca8e3e00a4"
	I0116 02:30:24.180308  339661 cri.go:89] found id: "2fcfba9cacb9710acb42287b1b9d341d3ff6409274ef76bf5c3e7e6e23b3072d"
	I0116 02:30:24.180311  339661 cri.go:89] found id: "d73e1a3f9c7267d16cce1411a4afe17ffadf25d4ab236b0e51a88e1d8b6a8e8d"
	I0116 02:30:24.180314  339661 cri.go:89] found id: "6afbf073157f75483560b2e0097fe323d6907f8474d4a49840ede5a4194f1044"
	I0116 02:30:24.180320  339661 cri.go:89] found id: "700d89260f1f5f9867fdbc2cce6c68c3196b6f3413749f6b9b800989a625fde5"
	I0116 02:30:24.180326  339661 cri.go:89] found id: "63f9d14fc3ad0a5d6e0369f99c29d1197d88562972841415d8a04ff144ab308d"
	I0116 02:30:24.180331  339661 cri.go:89] found id: "f62213dc4c9d0208f81eb5a7f1f87d416a0cefef5b9fda38fab44aeb8e64f130"
	I0116 02:30:24.180339  339661 cri.go:89] found id: "e336aeb71c83abd766a50ebfa30998dbecc6bf4f13579c9d2af4e33f822ee1d9"
	I0116 02:30:24.180349  339661 cri.go:89] found id: "90bda3e0890684c9575b910a7d6f1683c0d8d0e8a7b878080b41fd2e0fbbf838"
	I0116 02:30:24.180354  339661 cri.go:89] found id: "fad18290a2fb3ffdc2ec8cd0cf2aede88aecc8cf1f8f4bd2d6887ab5a31b9faf"
	I0116 02:30:24.180360  339661 cri.go:89] found id: "4c6ed9edd39f989aa3af24fdca478f068e679f91ca161b1ed7769237931eff14"
	I0116 02:30:24.180366  339661 cri.go:89] found id: "e8ab421fe893e0386476390c2fab37b7caf092084f63f9637a8174acb9b3aac4"
	I0116 02:30:24.180375  339661 cri.go:89] found id: "5e2f6181a70a331a0e57a069e148fe5145e59a44f23b47e3b5528f786f835507"
	I0116 02:30:24.180380  339661 cri.go:89] found id: "21d8aaa786930793df0d28382f1ef22b448769b5d8e93632d9a5f2c2b849ea03"
	I0116 02:30:24.180389  339661 cri.go:89] found id: ""
	I0116 02:30:24.180449  339661 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0116 02:30:24.248516  339661 main.go:141] libmachine: Making call to close driver server
	I0116 02:30:24.248560  339661 main.go:141] libmachine: (addons-133977) Calling .Close
	I0116 02:30:24.248867  339661 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:30:24.248887  339661 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:30:24.248910  339661 main.go:141] libmachine: (addons-133977) DBG | Closing plugin on server side
	I0116 02:30:24.251720  339661 out.go:177] 
	W0116 02:30:24.253282  339661 out.go:239] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-01-16T02:30:24Z" level=error msg="stat /run/containerd/runc/k8s.io/4744173521ad7687144254eea07ff8aff4223eee041b7cec0b7bda79fc00a1d0: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-01-16T02:30:24Z" level=error msg="stat /run/containerd/runc/k8s.io/4744173521ad7687144254eea07ff8aff4223eee041b7cec0b7bda79fc00a1d0: no such file or directory"
	
	W0116 02:30:24.253308  339661 out.go:239] * 
	* 
	W0116 02:30:24.258188  339661 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0116 02:30:24.259782  339661 out.go:177] 

** /stderr **
addons_test.go:390: failed to disable registry addon. args "out/minikube-linux-amd64 -p addons-133977 addons disable registry --alsologtostderr -v=1": exit status 11
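
The exit status 11 comes from minikube's paused-cluster check, which runs before an addon is disabled: it lists kube-system containers with crictl and then asks runc for their state. Here runc could not stat the directory for the first ID crictl had just returned (4744173521ad...), which suggests that container went away between the two calls. The two commands, copied from the log above, can be replayed on the node (a sketch; run them inside `minikube ssh -p addons-133977`):

    # list kube-system container IDs, as the paused check does (cri.go:54)
    sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"

    # list container state under the containerd runc root; in this run the stat of
    # /run/containerd/runc/k8s.io/4744... failed with "no such file or directory",
    # which minikube surfaces as MK_ADDON_DISABLE_PAUSED (exit status 11)
    sudo runc --root /run/containerd/runc/k8s.io list -f json
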
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-133977 -n addons-133977
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-133977 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-133977 logs -n 25: (2.336591912s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-378414 | jenkins | v1.32.0 | 16 Jan 24 02:27 UTC |                     |
	|         | -p download-only-378414              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.32.0 | 16 Jan 24 02:27 UTC | 16 Jan 24 02:27 UTC |
	| delete  | -p download-only-378414              | download-only-378414 | jenkins | v1.32.0 | 16 Jan 24 02:27 UTC | 16 Jan 24 02:27 UTC |
	| start   | -o=json --download-only              | download-only-612167 | jenkins | v1.32.0 | 16 Jan 24 02:27 UTC |                     |
	|         | -p download-only-612167              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.32.0 | 16 Jan 24 02:27 UTC | 16 Jan 24 02:27 UTC |
	| delete  | -p download-only-612167              | download-only-612167 | jenkins | v1.32.0 | 16 Jan 24 02:27 UTC | 16 Jan 24 02:27 UTC |
	| start   | -o=json --download-only              | download-only-990942 | jenkins | v1.32.0 | 16 Jan 24 02:27 UTC |                     |
	|         | -p download-only-990942              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2    |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.32.0 | 16 Jan 24 02:27 UTC | 16 Jan 24 02:27 UTC |
	| delete  | -p download-only-990942              | download-only-990942 | jenkins | v1.32.0 | 16 Jan 24 02:27 UTC | 16 Jan 24 02:27 UTC |
	| delete  | -p download-only-378414              | download-only-378414 | jenkins | v1.32.0 | 16 Jan 24 02:27 UTC | 16 Jan 24 02:27 UTC |
	| delete  | -p download-only-612167              | download-only-612167 | jenkins | v1.32.0 | 16 Jan 24 02:27 UTC | 16 Jan 24 02:27 UTC |
	| delete  | -p download-only-990942              | download-only-990942 | jenkins | v1.32.0 | 16 Jan 24 02:27 UTC | 16 Jan 24 02:27 UTC |
	| start   | --download-only -p                   | binary-mirror-511930 | jenkins | v1.32.0 | 16 Jan 24 02:27 UTC |                     |
	|         | binary-mirror-511930                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:45737               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-511930              | binary-mirror-511930 | jenkins | v1.32.0 | 16 Jan 24 02:27 UTC | 16 Jan 24 02:27 UTC |
	| addons  | enable dashboard -p                  | addons-133977        | jenkins | v1.32.0 | 16 Jan 24 02:27 UTC |                     |
	|         | addons-133977                        |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-133977        | jenkins | v1.32.0 | 16 Jan 24 02:27 UTC |                     |
	|         | addons-133977                        |                      |         |         |                     |                     |
	| start   | -p addons-133977 --wait=true         | addons-133977        | jenkins | v1.32.0 | 16 Jan 24 02:27 UTC | 16 Jan 24 02:30 UTC |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2          |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-133977        | jenkins | v1.32.0 | 16 Jan 24 02:30 UTC | 16 Jan 24 02:30 UTC |
	|         | -p addons-133977                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-133977        | jenkins | v1.32.0 | 16 Jan 24 02:30 UTC | 16 Jan 24 02:30 UTC |
	|         | -p addons-133977                     |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p             | addons-133977        | jenkins | v1.32.0 | 16 Jan 24 02:30 UTC | 16 Jan 24 02:30 UTC |
	|         | addons-133977                        |                      |         |         |                     |                     |
	| ip      | addons-133977 ip                     | addons-133977        | jenkins | v1.32.0 | 16 Jan 24 02:30 UTC | 16 Jan 24 02:30 UTC |
	| addons  | addons-133977 addons disable         | addons-133977        | jenkins | v1.32.0 | 16 Jan 24 02:30 UTC |                     |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 02:27:34
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 02:27:34.820713  338598 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:27:34.820859  338598 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:27:34.820870  338598 out.go:309] Setting ErrFile to fd 2...
	I0116 02:27:34.820875  338598 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:27:34.821080  338598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-330687/.minikube/bin
	I0116 02:27:34.821778  338598 out.go:303] Setting JSON to false
	I0116 02:27:34.822709  338598 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":33007,"bootTime":1705339048,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:27:34.822781  338598 start.go:138] virtualization: kvm guest
	I0116 02:27:34.825226  338598 out.go:177] * [addons-133977] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 02:27:34.826760  338598 out.go:177]   - MINIKUBE_LOCATION=17965
	I0116 02:27:34.826757  338598 notify.go:220] Checking for updates...
	I0116 02:27:34.828215  338598 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:27:34.829597  338598 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17965-330687/kubeconfig
	I0116 02:27:34.830896  338598 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-330687/.minikube
	I0116 02:27:34.832205  338598 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 02:27:34.833560  338598 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 02:27:34.835184  338598 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:27:34.869437  338598 out.go:177] * Using the kvm2 driver based on user configuration
	I0116 02:27:34.870767  338598 start.go:298] selected driver: kvm2
	I0116 02:27:34.870785  338598 start.go:902] validating driver "kvm2" against <nil>
	I0116 02:27:34.870801  338598 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 02:27:34.871815  338598 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 02:27:34.871929  338598 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17965-330687/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 02:27:34.886795  338598 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 02:27:34.886881  338598 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 02:27:34.887118  338598 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 02:27:34.887182  338598 cni.go:84] Creating CNI manager for ""
	I0116 02:27:34.887195  338598 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0116 02:27:34.887207  338598 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0116 02:27:34.887216  338598 start_flags.go:321] config:
	{Name:addons-133977 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-133977 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:27:34.887351  338598 iso.go:125] acquiring lock: {Name:mk83fca54b69be1d8016cc7581ed959170948280 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 02:27:34.890192  338598 out.go:177] * Starting control plane node addons-133977 in cluster addons-133977
	I0116 02:27:34.891405  338598 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0116 02:27:34.891453  338598 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17965-330687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0116 02:27:34.891466  338598 cache.go:56] Caching tarball of preloaded images
	I0116 02:27:34.891581  338598 preload.go:174] Found /home/jenkins/minikube-integration/17965-330687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0116 02:27:34.891606  338598 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I0116 02:27:34.891966  338598 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/config.json ...
	I0116 02:27:34.891997  338598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/config.json: {Name:mk5cc12c6b5769abe51e9a178087e24952075fa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:27:34.892191  338598 start.go:365] acquiring machines lock for addons-133977: {Name:mkde42fd95bf5943335c71dd621724b55130d8f3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 02:27:34.892239  338598 start.go:369] acquired machines lock for "addons-133977" in 33.97µs
	I0116 02:27:34.892259  338598 start.go:93] Provisioning new machine with config: &{Name:addons-133977 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:addons-133977 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0116 02:27:34.892358  338598 start.go:125] createHost starting for "" (driver="kvm2")
	I0116 02:27:34.894079  338598 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0116 02:27:34.894212  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:27:34.894263  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:27:34.908874  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32827
	I0116 02:27:34.909523  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:27:34.910159  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:27:34.910184  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:27:34.910626  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:27:34.910845  338598 main.go:141] libmachine: (addons-133977) Calling .GetMachineName
	I0116 02:27:34.910981  338598 main.go:141] libmachine: (addons-133977) Calling .DriverName
	I0116 02:27:34.911089  338598 start.go:159] libmachine.API.Create for "addons-133977" (driver="kvm2")
	I0116 02:27:34.911127  338598 client.go:168] LocalClient.Create starting
	I0116 02:27:34.911179  338598 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17965-330687/.minikube/certs/ca.pem
	I0116 02:27:35.270900  338598 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17965-330687/.minikube/certs/cert.pem
	I0116 02:27:35.381752  338598 main.go:141] libmachine: Running pre-create checks...
	I0116 02:27:35.381779  338598 main.go:141] libmachine: (addons-133977) Calling .PreCreateCheck
	I0116 02:27:35.382351  338598 main.go:141] libmachine: (addons-133977) Calling .GetConfigRaw
	I0116 02:27:35.382895  338598 main.go:141] libmachine: Creating machine...
	I0116 02:27:35.382912  338598 main.go:141] libmachine: (addons-133977) Calling .Create
	I0116 02:27:35.383063  338598 main.go:141] libmachine: (addons-133977) Creating KVM machine...
	I0116 02:27:35.384383  338598 main.go:141] libmachine: (addons-133977) DBG | found existing default KVM network
	I0116 02:27:35.385119  338598 main.go:141] libmachine: (addons-133977) DBG | I0116 02:27:35.384924  338620 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a40}
	I0116 02:27:35.390580  338598 main.go:141] libmachine: (addons-133977) DBG | trying to create private KVM network mk-addons-133977 192.168.39.0/24...
	I0116 02:27:35.464594  338598 main.go:141] libmachine: (addons-133977) DBG | private KVM network mk-addons-133977 192.168.39.0/24 created
	I0116 02:27:35.464630  338598 main.go:141] libmachine: (addons-133977) DBG | I0116 02:27:35.464562  338620 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17965-330687/.minikube
	I0116 02:27:35.464647  338598 main.go:141] libmachine: (addons-133977) Setting up store path in /home/jenkins/minikube-integration/17965-330687/.minikube/machines/addons-133977 ...
	I0116 02:27:35.464666  338598 main.go:141] libmachine: (addons-133977) Building disk image from file:///home/jenkins/minikube-integration/17965-330687/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0116 02:27:35.464837  338598 main.go:141] libmachine: (addons-133977) Downloading /home/jenkins/minikube-integration/17965-330687/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17965-330687/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0116 02:27:35.697222  338598 main.go:141] libmachine: (addons-133977) DBG | I0116 02:27:35.697062  338620 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17965-330687/.minikube/machines/addons-133977/id_rsa...
	I0116 02:27:35.853771  338598 main.go:141] libmachine: (addons-133977) DBG | I0116 02:27:35.853538  338620 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17965-330687/.minikube/machines/addons-133977/addons-133977.rawdisk...
	I0116 02:27:35.853825  338598 main.go:141] libmachine: (addons-133977) DBG | Writing magic tar header
	I0116 02:27:35.853838  338598 main.go:141] libmachine: (addons-133977) DBG | Writing SSH key tar header
	I0116 02:27:35.853847  338598 main.go:141] libmachine: (addons-133977) DBG | I0116 02:27:35.853700  338620 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17965-330687/.minikube/machines/addons-133977 ...
	I0116 02:27:35.853894  338598 main.go:141] libmachine: (addons-133977) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17965-330687/.minikube/machines/addons-133977
	I0116 02:27:35.853920  338598 main.go:141] libmachine: (addons-133977) Setting executable bit set on /home/jenkins/minikube-integration/17965-330687/.minikube/machines/addons-133977 (perms=drwx------)
	I0116 02:27:35.853934  338598 main.go:141] libmachine: (addons-133977) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17965-330687/.minikube/machines
	I0116 02:27:35.853945  338598 main.go:141] libmachine: (addons-133977) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17965-330687/.minikube
	I0116 02:27:35.853954  338598 main.go:141] libmachine: (addons-133977) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17965-330687
	I0116 02:27:35.853961  338598 main.go:141] libmachine: (addons-133977) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0116 02:27:35.853971  338598 main.go:141] libmachine: (addons-133977) DBG | Checking permissions on dir: /home/jenkins
	I0116 02:27:35.853988  338598 main.go:141] libmachine: (addons-133977) DBG | Checking permissions on dir: /home
	I0116 02:27:35.853997  338598 main.go:141] libmachine: (addons-133977) DBG | Skipping /home - not owner
	I0116 02:27:35.854019  338598 main.go:141] libmachine: (addons-133977) Setting executable bit set on /home/jenkins/minikube-integration/17965-330687/.minikube/machines (perms=drwxr-xr-x)
	I0116 02:27:35.854035  338598 main.go:141] libmachine: (addons-133977) Setting executable bit set on /home/jenkins/minikube-integration/17965-330687/.minikube (perms=drwxr-xr-x)
	I0116 02:27:35.854046  338598 main.go:141] libmachine: (addons-133977) Setting executable bit set on /home/jenkins/minikube-integration/17965-330687 (perms=drwxrwxr-x)
	I0116 02:27:35.854053  338598 main.go:141] libmachine: (addons-133977) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0116 02:27:35.854064  338598 main.go:141] libmachine: (addons-133977) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0116 02:27:35.854073  338598 main.go:141] libmachine: (addons-133977) Creating domain...
	I0116 02:27:35.855572  338598 main.go:141] libmachine: (addons-133977) define libvirt domain using xml: 
	I0116 02:27:35.855633  338598 main.go:141] libmachine: (addons-133977) <domain type='kvm'>
	I0116 02:27:35.855646  338598 main.go:141] libmachine: (addons-133977)   <name>addons-133977</name>
	I0116 02:27:35.855652  338598 main.go:141] libmachine: (addons-133977)   <memory unit='MiB'>4000</memory>
	I0116 02:27:35.855659  338598 main.go:141] libmachine: (addons-133977)   <vcpu>2</vcpu>
	I0116 02:27:35.855668  338598 main.go:141] libmachine: (addons-133977)   <features>
	I0116 02:27:35.855675  338598 main.go:141] libmachine: (addons-133977)     <acpi/>
	I0116 02:27:35.855684  338598 main.go:141] libmachine: (addons-133977)     <apic/>
	I0116 02:27:35.855723  338598 main.go:141] libmachine: (addons-133977)     <pae/>
	I0116 02:27:35.855750  338598 main.go:141] libmachine: (addons-133977)     
	I0116 02:27:35.855764  338598 main.go:141] libmachine: (addons-133977)   </features>
	I0116 02:27:35.855777  338598 main.go:141] libmachine: (addons-133977)   <cpu mode='host-passthrough'>
	I0116 02:27:35.855789  338598 main.go:141] libmachine: (addons-133977)   
	I0116 02:27:35.855797  338598 main.go:141] libmachine: (addons-133977)   </cpu>
	I0116 02:27:35.855804  338598 main.go:141] libmachine: (addons-133977)   <os>
	I0116 02:27:35.855822  338598 main.go:141] libmachine: (addons-133977)     <type>hvm</type>
	I0116 02:27:35.855833  338598 main.go:141] libmachine: (addons-133977)     <boot dev='cdrom'/>
	I0116 02:27:35.855846  338598 main.go:141] libmachine: (addons-133977)     <boot dev='hd'/>
	I0116 02:27:35.855860  338598 main.go:141] libmachine: (addons-133977)     <bootmenu enable='no'/>
	I0116 02:27:35.855873  338598 main.go:141] libmachine: (addons-133977)   </os>
	I0116 02:27:35.855886  338598 main.go:141] libmachine: (addons-133977)   <devices>
	I0116 02:27:35.855909  338598 main.go:141] libmachine: (addons-133977)     <disk type='file' device='cdrom'>
	I0116 02:27:35.855928  338598 main.go:141] libmachine: (addons-133977)       <source file='/home/jenkins/minikube-integration/17965-330687/.minikube/machines/addons-133977/boot2docker.iso'/>
	I0116 02:27:35.855941  338598 main.go:141] libmachine: (addons-133977)       <target dev='hdc' bus='scsi'/>
	I0116 02:27:35.855954  338598 main.go:141] libmachine: (addons-133977)       <readonly/>
	I0116 02:27:35.855969  338598 main.go:141] libmachine: (addons-133977)     </disk>
	I0116 02:27:35.855983  338598 main.go:141] libmachine: (addons-133977)     <disk type='file' device='disk'>
	I0116 02:27:35.855996  338598 main.go:141] libmachine: (addons-133977)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0116 02:27:35.856012  338598 main.go:141] libmachine: (addons-133977)       <source file='/home/jenkins/minikube-integration/17965-330687/.minikube/machines/addons-133977/addons-133977.rawdisk'/>
	I0116 02:27:35.856025  338598 main.go:141] libmachine: (addons-133977)       <target dev='hda' bus='virtio'/>
	I0116 02:27:35.856051  338598 main.go:141] libmachine: (addons-133977)     </disk>
	I0116 02:27:35.856078  338598 main.go:141] libmachine: (addons-133977)     <interface type='network'>
	I0116 02:27:35.856101  338598 main.go:141] libmachine: (addons-133977)       <source network='mk-addons-133977'/>
	I0116 02:27:35.856130  338598 main.go:141] libmachine: (addons-133977)       <model type='virtio'/>
	I0116 02:27:35.856146  338598 main.go:141] libmachine: (addons-133977)     </interface>
	I0116 02:27:35.856160  338598 main.go:141] libmachine: (addons-133977)     <interface type='network'>
	I0116 02:27:35.856176  338598 main.go:141] libmachine: (addons-133977)       <source network='default'/>
	I0116 02:27:35.856198  338598 main.go:141] libmachine: (addons-133977)       <model type='virtio'/>
	I0116 02:27:35.856238  338598 main.go:141] libmachine: (addons-133977)     </interface>
	I0116 02:27:35.856253  338598 main.go:141] libmachine: (addons-133977)     <serial type='pty'>
	I0116 02:27:35.856268  338598 main.go:141] libmachine: (addons-133977)       <target port='0'/>
	I0116 02:27:35.856281  338598 main.go:141] libmachine: (addons-133977)     </serial>
	I0116 02:27:35.856293  338598 main.go:141] libmachine: (addons-133977)     <console type='pty'>
	I0116 02:27:35.856313  338598 main.go:141] libmachine: (addons-133977)       <target type='serial' port='0'/>
	I0116 02:27:35.856328  338598 main.go:141] libmachine: (addons-133977)     </console>
	I0116 02:27:35.856342  338598 main.go:141] libmachine: (addons-133977)     <rng model='virtio'>
	I0116 02:27:35.856364  338598 main.go:141] libmachine: (addons-133977)       <backend model='random'>/dev/random</backend>
	I0116 02:27:35.856383  338598 main.go:141] libmachine: (addons-133977)     </rng>
	I0116 02:27:35.856403  338598 main.go:141] libmachine: (addons-133977)     
	I0116 02:27:35.856416  338598 main.go:141] libmachine: (addons-133977)     
	I0116 02:27:35.856431  338598 main.go:141] libmachine: (addons-133977)   </devices>
	I0116 02:27:35.856442  338598 main.go:141] libmachine: (addons-133977) </domain>
	I0116 02:27:35.856458  338598 main.go:141] libmachine: (addons-133977) 
	I0116 02:27:35.860952  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:92:d6:fb in network default
	I0116 02:27:35.861647  338598 main.go:141] libmachine: (addons-133977) Ensuring networks are active...
	I0116 02:27:35.861677  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:27:35.862388  338598 main.go:141] libmachine: (addons-133977) Ensuring network default is active
	I0116 02:27:35.862719  338598 main.go:141] libmachine: (addons-133977) Ensuring network mk-addons-133977 is active
	I0116 02:27:35.863229  338598 main.go:141] libmachine: (addons-133977) Getting domain xml...
	I0116 02:27:35.863962  338598 main.go:141] libmachine: (addons-133977) Creating domain...
	I0116 02:27:37.080975  338598 main.go:141] libmachine: (addons-133977) Waiting to get IP...
	I0116 02:27:37.081927  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:27:37.082365  338598 main.go:141] libmachine: (addons-133977) DBG | unable to find current IP address of domain addons-133977 in network mk-addons-133977
	I0116 02:27:37.082439  338598 main.go:141] libmachine: (addons-133977) DBG | I0116 02:27:37.082361  338620 retry.go:31] will retry after 276.418996ms: waiting for machine to come up
	I0116 02:27:37.360901  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:27:37.361434  338598 main.go:141] libmachine: (addons-133977) DBG | unable to find current IP address of domain addons-133977 in network mk-addons-133977
	I0116 02:27:37.361476  338598 main.go:141] libmachine: (addons-133977) DBG | I0116 02:27:37.361380  338620 retry.go:31] will retry after 385.635421ms: waiting for machine to come up
	I0116 02:27:37.749296  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:27:37.749720  338598 main.go:141] libmachine: (addons-133977) DBG | unable to find current IP address of domain addons-133977 in network mk-addons-133977
	I0116 02:27:37.749741  338598 main.go:141] libmachine: (addons-133977) DBG | I0116 02:27:37.749692  338620 retry.go:31] will retry after 341.783466ms: waiting for machine to come up
	I0116 02:27:38.094507  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:27:38.094977  338598 main.go:141] libmachine: (addons-133977) DBG | unable to find current IP address of domain addons-133977 in network mk-addons-133977
	I0116 02:27:38.095000  338598 main.go:141] libmachine: (addons-133977) DBG | I0116 02:27:38.094922  338620 retry.go:31] will retry after 582.014816ms: waiting for machine to come up
	I0116 02:27:38.679010  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:27:38.679451  338598 main.go:141] libmachine: (addons-133977) DBG | unable to find current IP address of domain addons-133977 in network mk-addons-133977
	I0116 02:27:38.679480  338598 main.go:141] libmachine: (addons-133977) DBG | I0116 02:27:38.679390  338620 retry.go:31] will retry after 539.81674ms: waiting for machine to come up
	I0116 02:27:39.221381  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:27:39.221820  338598 main.go:141] libmachine: (addons-133977) DBG | unable to find current IP address of domain addons-133977 in network mk-addons-133977
	I0116 02:27:39.221881  338598 main.go:141] libmachine: (addons-133977) DBG | I0116 02:27:39.221793  338620 retry.go:31] will retry after 920.727136ms: waiting for machine to come up
	I0116 02:27:40.143707  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:27:40.144158  338598 main.go:141] libmachine: (addons-133977) DBG | unable to find current IP address of domain addons-133977 in network mk-addons-133977
	I0116 02:27:40.144182  338598 main.go:141] libmachine: (addons-133977) DBG | I0116 02:27:40.144116  338620 retry.go:31] will retry after 868.251931ms: waiting for machine to come up
	I0116 02:27:41.014287  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:27:41.014871  338598 main.go:141] libmachine: (addons-133977) DBG | unable to find current IP address of domain addons-133977 in network mk-addons-133977
	I0116 02:27:41.014907  338598 main.go:141] libmachine: (addons-133977) DBG | I0116 02:27:41.014793  338620 retry.go:31] will retry after 1.419523188s: waiting for machine to come up
	I0116 02:27:42.436356  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:27:42.436833  338598 main.go:141] libmachine: (addons-133977) DBG | unable to find current IP address of domain addons-133977 in network mk-addons-133977
	I0116 02:27:42.436858  338598 main.go:141] libmachine: (addons-133977) DBG | I0116 02:27:42.436783  338620 retry.go:31] will retry after 1.717561788s: waiting for machine to come up
	I0116 02:27:44.155732  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:27:44.156151  338598 main.go:141] libmachine: (addons-133977) DBG | unable to find current IP address of domain addons-133977 in network mk-addons-133977
	I0116 02:27:44.156195  338598 main.go:141] libmachine: (addons-133977) DBG | I0116 02:27:44.156085  338620 retry.go:31] will retry after 2.128505889s: waiting for machine to come up
	I0116 02:27:46.285957  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:27:46.286506  338598 main.go:141] libmachine: (addons-133977) DBG | unable to find current IP address of domain addons-133977 in network mk-addons-133977
	I0116 02:27:46.286543  338598 main.go:141] libmachine: (addons-133977) DBG | I0116 02:27:46.286448  338620 retry.go:31] will retry after 1.925904544s: waiting for machine to come up
	I0116 02:27:48.214610  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:27:48.215000  338598 main.go:141] libmachine: (addons-133977) DBG | unable to find current IP address of domain addons-133977 in network mk-addons-133977
	I0116 02:27:48.215032  338598 main.go:141] libmachine: (addons-133977) DBG | I0116 02:27:48.214961  338620 retry.go:31] will retry after 2.913391274s: waiting for machine to come up
	I0116 02:27:51.129893  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:27:51.130384  338598 main.go:141] libmachine: (addons-133977) DBG | unable to find current IP address of domain addons-133977 in network mk-addons-133977
	I0116 02:27:51.130409  338598 main.go:141] libmachine: (addons-133977) DBG | I0116 02:27:51.130342  338620 retry.go:31] will retry after 4.123320574s: waiting for machine to come up
	I0116 02:27:55.255012  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:27:55.255347  338598 main.go:141] libmachine: (addons-133977) DBG | unable to find current IP address of domain addons-133977 in network mk-addons-133977
	I0116 02:27:55.255382  338598 main.go:141] libmachine: (addons-133977) DBG | I0116 02:27:55.255323  338620 retry.go:31] will retry after 4.232842799s: waiting for machine to come up
	I0116 02:27:59.492859  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:27:59.493279  338598 main.go:141] libmachine: (addons-133977) Found IP for machine: 192.168.39.10
	I0116 02:27:59.493302  338598 main.go:141] libmachine: (addons-133977) Reserving static IP address...
	I0116 02:27:59.493317  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has current primary IP address 192.168.39.10 and MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:27:59.493703  338598 main.go:141] libmachine: (addons-133977) DBG | unable to find host DHCP lease matching {name: "addons-133977", mac: "52:54:00:f8:06:14", ip: "192.168.39.10"} in network mk-addons-133977
	I0116 02:27:59.574058  338598 main.go:141] libmachine: (addons-133977) DBG | Getting to WaitForSSH function...
	I0116 02:27:59.574087  338598 main.go:141] libmachine: (addons-133977) Reserved static IP address: 192.168.39.10
	I0116 02:27:59.574096  338598 main.go:141] libmachine: (addons-133977) Waiting for SSH to be available...
	I0116 02:27:59.577001  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:27:59.577452  338598 main.go:141] libmachine: (addons-133977) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:f8:06:14", ip: ""} in network mk-addons-133977
	I0116 02:27:59.577478  338598 main.go:141] libmachine: (addons-133977) DBG | unable to find defined IP address of network mk-addons-133977 interface with MAC address 52:54:00:f8:06:14
	I0116 02:27:59.577628  338598 main.go:141] libmachine: (addons-133977) DBG | Using SSH client type: external
	I0116 02:27:59.577655  338598 main.go:141] libmachine: (addons-133977) DBG | Using SSH private key: /home/jenkins/minikube-integration/17965-330687/.minikube/machines/addons-133977/id_rsa (-rw-------)
	I0116 02:27:59.577689  338598 main.go:141] libmachine: (addons-133977) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17965-330687/.minikube/machines/addons-133977/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 02:27:59.577705  338598 main.go:141] libmachine: (addons-133977) DBG | About to run SSH command:
	I0116 02:27:59.577719  338598 main.go:141] libmachine: (addons-133977) DBG | exit 0
	I0116 02:27:59.581633  338598 main.go:141] libmachine: (addons-133977) DBG | SSH cmd err, output: exit status 255: 
	I0116 02:27:59.581674  338598 main.go:141] libmachine: (addons-133977) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0116 02:27:59.581690  338598 main.go:141] libmachine: (addons-133977) DBG | command : exit 0
	I0116 02:27:59.581717  338598 main.go:141] libmachine: (addons-133977) DBG | err     : exit status 255
	I0116 02:27:59.581734  338598 main.go:141] libmachine: (addons-133977) DBG | output  : 
	I0116 02:28:02.584385  338598 main.go:141] libmachine: (addons-133977) DBG | Getting to WaitForSSH function...
	I0116 02:28:02.586896  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:02.587350  338598 main.go:141] libmachine: (addons-133977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:06:14", ip: ""} in network mk-addons-133977: {Iface:virbr1 ExpiryTime:2024-01-16 03:27:51 +0000 UTC Type:0 Mac:52:54:00:f8:06:14 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:addons-133977 Clientid:01:52:54:00:f8:06:14}
	I0116 02:28:02.587383  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined IP address 192.168.39.10 and MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:02.587564  338598 main.go:141] libmachine: (addons-133977) DBG | Using SSH client type: external
	I0116 02:28:02.587632  338598 main.go:141] libmachine: (addons-133977) DBG | Using SSH private key: /home/jenkins/minikube-integration/17965-330687/.minikube/machines/addons-133977/id_rsa (-rw-------)
	I0116 02:28:02.587674  338598 main.go:141] libmachine: (addons-133977) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17965-330687/.minikube/machines/addons-133977/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 02:28:02.587705  338598 main.go:141] libmachine: (addons-133977) DBG | About to run SSH command:
	I0116 02:28:02.587719  338598 main.go:141] libmachine: (addons-133977) DBG | exit 0
	I0116 02:28:02.678590  338598 main.go:141] libmachine: (addons-133977) DBG | SSH cmd err, output: <nil>: 
	I0116 02:28:02.678878  338598 main.go:141] libmachine: (addons-133977) KVM machine creation complete!
	I0116 02:28:02.679226  338598 main.go:141] libmachine: (addons-133977) Calling .GetConfigRaw
	I0116 02:28:02.679831  338598 main.go:141] libmachine: (addons-133977) Calling .DriverName
	I0116 02:28:02.680049  338598 main.go:141] libmachine: (addons-133977) Calling .DriverName
	I0116 02:28:02.680248  338598 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0116 02:28:02.680269  338598 main.go:141] libmachine: (addons-133977) Calling .GetState
	I0116 02:28:02.681807  338598 main.go:141] libmachine: Detecting operating system of created instance...
	I0116 02:28:02.681825  338598 main.go:141] libmachine: Waiting for SSH to be available...
	I0116 02:28:02.681835  338598 main.go:141] libmachine: Getting to WaitForSSH function...
	I0116 02:28:02.681844  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHHostname
	I0116 02:28:02.684362  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:02.684708  338598 main.go:141] libmachine: (addons-133977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:06:14", ip: ""} in network mk-addons-133977: {Iface:virbr1 ExpiryTime:2024-01-16 03:27:51 +0000 UTC Type:0 Mac:52:54:00:f8:06:14 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:addons-133977 Clientid:01:52:54:00:f8:06:14}
	I0116 02:28:02.684737  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined IP address 192.168.39.10 and MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:02.684860  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHPort
	I0116 02:28:02.685063  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHKeyPath
	I0116 02:28:02.685246  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHKeyPath
	I0116 02:28:02.685425  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHUsername
	I0116 02:28:02.685562  338598 main.go:141] libmachine: Using SSH client type: native
	I0116 02:28:02.685937  338598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0116 02:28:02.685956  338598 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0116 02:28:02.809768  338598 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 02:28:02.809805  338598 main.go:141] libmachine: Detecting the provisioner...
	I0116 02:28:02.809816  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHHostname
	I0116 02:28:02.812760  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:02.813248  338598 main.go:141] libmachine: (addons-133977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:06:14", ip: ""} in network mk-addons-133977: {Iface:virbr1 ExpiryTime:2024-01-16 03:27:51 +0000 UTC Type:0 Mac:52:54:00:f8:06:14 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:addons-133977 Clientid:01:52:54:00:f8:06:14}
	I0116 02:28:02.813271  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined IP address 192.168.39.10 and MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:02.813442  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHPort
	I0116 02:28:02.813668  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHKeyPath
	I0116 02:28:02.813866  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHKeyPath
	I0116 02:28:02.814031  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHUsername
	I0116 02:28:02.814227  338598 main.go:141] libmachine: Using SSH client type: native
	I0116 02:28:02.814714  338598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0116 02:28:02.814737  338598 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0116 02:28:02.939134  338598 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0116 02:28:02.939239  338598 main.go:141] libmachine: found compatible host: buildroot
	I0116 02:28:02.939249  338598 main.go:141] libmachine: Provisioning with buildroot...
	I0116 02:28:02.939259  338598 main.go:141] libmachine: (addons-133977) Calling .GetMachineName
	I0116 02:28:02.939554  338598 buildroot.go:166] provisioning hostname "addons-133977"
	I0116 02:28:02.939583  338598 main.go:141] libmachine: (addons-133977) Calling .GetMachineName
	I0116 02:28:02.939813  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHHostname
	I0116 02:28:02.942312  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:02.942634  338598 main.go:141] libmachine: (addons-133977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:06:14", ip: ""} in network mk-addons-133977: {Iface:virbr1 ExpiryTime:2024-01-16 03:27:51 +0000 UTC Type:0 Mac:52:54:00:f8:06:14 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:addons-133977 Clientid:01:52:54:00:f8:06:14}
	I0116 02:28:02.942692  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined IP address 192.168.39.10 and MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:02.942856  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHPort
	I0116 02:28:02.943011  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHKeyPath
	I0116 02:28:02.943191  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHKeyPath
	I0116 02:28:02.943306  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHUsername
	I0116 02:28:02.943472  338598 main.go:141] libmachine: Using SSH client type: native
	I0116 02:28:02.943887  338598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0116 02:28:02.943906  338598 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-133977 && echo "addons-133977" | sudo tee /etc/hostname
	I0116 02:28:03.079506  338598 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-133977
	
	I0116 02:28:03.079546  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHHostname
	I0116 02:28:03.082715  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:03.083156  338598 main.go:141] libmachine: (addons-133977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:06:14", ip: ""} in network mk-addons-133977: {Iface:virbr1 ExpiryTime:2024-01-16 03:27:51 +0000 UTC Type:0 Mac:52:54:00:f8:06:14 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:addons-133977 Clientid:01:52:54:00:f8:06:14}
	I0116 02:28:03.083185  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined IP address 192.168.39.10 and MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:03.083370  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHPort
	I0116 02:28:03.083595  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHKeyPath
	I0116 02:28:03.083794  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHKeyPath
	I0116 02:28:03.083962  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHUsername
	I0116 02:28:03.084139  338598 main.go:141] libmachine: Using SSH client type: native
	I0116 02:28:03.084565  338598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0116 02:28:03.084586  338598 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-133977' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-133977/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-133977' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 02:28:03.219215  338598 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 02:28:03.219257  338598 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17965-330687/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-330687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-330687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-330687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-330687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-330687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-330687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-330687/.minikube}
	I0116 02:28:03.219308  338598 buildroot.go:174] setting up certificates
	I0116 02:28:03.219326  338598 provision.go:83] configureAuth start
	I0116 02:28:03.219349  338598 main.go:141] libmachine: (addons-133977) Calling .GetMachineName
	I0116 02:28:03.219675  338598 main.go:141] libmachine: (addons-133977) Calling .GetIP
	I0116 02:28:03.222795  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:03.223173  338598 main.go:141] libmachine: (addons-133977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:06:14", ip: ""} in network mk-addons-133977: {Iface:virbr1 ExpiryTime:2024-01-16 03:27:51 +0000 UTC Type:0 Mac:52:54:00:f8:06:14 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:addons-133977 Clientid:01:52:54:00:f8:06:14}
	I0116 02:28:03.223211  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined IP address 192.168.39.10 and MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:03.223366  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHHostname
	I0116 02:28:03.225566  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:03.225937  338598 main.go:141] libmachine: (addons-133977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:06:14", ip: ""} in network mk-addons-133977: {Iface:virbr1 ExpiryTime:2024-01-16 03:27:51 +0000 UTC Type:0 Mac:52:54:00:f8:06:14 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:addons-133977 Clientid:01:52:54:00:f8:06:14}
	I0116 02:28:03.225962  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined IP address 192.168.39.10 and MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:03.226111  338598 provision.go:138] copyHostCerts
	I0116 02:28:03.226200  338598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-330687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-330687/.minikube/cert.pem (1123 bytes)
	I0116 02:28:03.226346  338598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-330687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-330687/.minikube/key.pem (1679 bytes)
	I0116 02:28:03.226461  338598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-330687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-330687/.minikube/ca.pem (1082 bytes)
	I0116 02:28:03.226552  338598 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-330687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-330687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-330687/.minikube/certs/ca-key.pem org=jenkins.addons-133977 san=[192.168.39.10 192.168.39.10 localhost 127.0.0.1 minikube addons-133977]
	I0116 02:28:03.309630  338598 provision.go:172] copyRemoteCerts
	I0116 02:28:03.309730  338598 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 02:28:03.309763  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHHostname
	I0116 02:28:03.313116  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:03.313510  338598 main.go:141] libmachine: (addons-133977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:06:14", ip: ""} in network mk-addons-133977: {Iface:virbr1 ExpiryTime:2024-01-16 03:27:51 +0000 UTC Type:0 Mac:52:54:00:f8:06:14 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:addons-133977 Clientid:01:52:54:00:f8:06:14}
	I0116 02:28:03.313546  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined IP address 192.168.39.10 and MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:03.313733  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHPort
	I0116 02:28:03.313984  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHKeyPath
	I0116 02:28:03.314133  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHUsername
	I0116 02:28:03.314342  338598 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-330687/.minikube/machines/addons-133977/id_rsa Username:docker}
	I0116 02:28:03.408510  338598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-330687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 02:28:03.433340  338598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-330687/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0116 02:28:03.458062  338598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-330687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 02:28:03.482909  338598 provision.go:86] duration metric: configureAuth took 263.561302ms
	I0116 02:28:03.482947  338598 buildroot.go:189] setting minikube options for container-runtime
	I0116 02:28:03.483169  338598 config.go:182] Loaded profile config "addons-133977": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0116 02:28:03.483195  338598 main.go:141] libmachine: Checking connection to Docker...
	I0116 02:28:03.483211  338598 main.go:141] libmachine: (addons-133977) Calling .GetURL
	I0116 02:28:03.484504  338598 main.go:141] libmachine: (addons-133977) DBG | Using libvirt version 6000000
	I0116 02:28:03.486711  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:03.487085  338598 main.go:141] libmachine: (addons-133977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:06:14", ip: ""} in network mk-addons-133977: {Iface:virbr1 ExpiryTime:2024-01-16 03:27:51 +0000 UTC Type:0 Mac:52:54:00:f8:06:14 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:addons-133977 Clientid:01:52:54:00:f8:06:14}
	I0116 02:28:03.487118  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined IP address 192.168.39.10 and MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:03.487355  338598 main.go:141] libmachine: Docker is up and running!
	I0116 02:28:03.487376  338598 main.go:141] libmachine: Reticulating splines...
	I0116 02:28:03.487385  338598 client.go:171] LocalClient.Create took 28.576245568s
	I0116 02:28:03.487412  338598 start.go:167] duration metric: libmachine.API.Create for "addons-133977" took 28.576323766s
	I0116 02:28:03.487426  338598 start.go:300] post-start starting for "addons-133977" (driver="kvm2")
	I0116 02:28:03.487442  338598 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 02:28:03.487468  338598 main.go:141] libmachine: (addons-133977) Calling .DriverName
	I0116 02:28:03.487715  338598 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 02:28:03.487766  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHHostname
	I0116 02:28:03.489900  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:03.490220  338598 main.go:141] libmachine: (addons-133977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:06:14", ip: ""} in network mk-addons-133977: {Iface:virbr1 ExpiryTime:2024-01-16 03:27:51 +0000 UTC Type:0 Mac:52:54:00:f8:06:14 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:addons-133977 Clientid:01:52:54:00:f8:06:14}
	I0116 02:28:03.490250  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined IP address 192.168.39.10 and MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:03.490440  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHPort
	I0116 02:28:03.490632  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHKeyPath
	I0116 02:28:03.490805  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHUsername
	I0116 02:28:03.490915  338598 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-330687/.minikube/machines/addons-133977/id_rsa Username:docker}
	I0116 02:28:03.584630  338598 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 02:28:03.588759  338598 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 02:28:03.588789  338598 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-330687/.minikube/addons for local assets ...
	I0116 02:28:03.588861  338598 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-330687/.minikube/files for local assets ...
	I0116 02:28:03.588897  338598 start.go:303] post-start completed in 101.461451ms
	I0116 02:28:03.588931  338598 main.go:141] libmachine: (addons-133977) Calling .GetConfigRaw
	I0116 02:28:03.589522  338598 main.go:141] libmachine: (addons-133977) Calling .GetIP
	I0116 02:28:03.592556  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:03.592965  338598 main.go:141] libmachine: (addons-133977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:06:14", ip: ""} in network mk-addons-133977: {Iface:virbr1 ExpiryTime:2024-01-16 03:27:51 +0000 UTC Type:0 Mac:52:54:00:f8:06:14 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:addons-133977 Clientid:01:52:54:00:f8:06:14}
	I0116 02:28:03.592997  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined IP address 192.168.39.10 and MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:03.593239  338598 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/config.json ...
	I0116 02:28:03.593425  338598 start.go:128] duration metric: createHost completed in 28.701054682s
	I0116 02:28:03.593449  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHHostname
	I0116 02:28:03.595906  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:03.596237  338598 main.go:141] libmachine: (addons-133977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:06:14", ip: ""} in network mk-addons-133977: {Iface:virbr1 ExpiryTime:2024-01-16 03:27:51 +0000 UTC Type:0 Mac:52:54:00:f8:06:14 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:addons-133977 Clientid:01:52:54:00:f8:06:14}
	I0116 02:28:03.596265  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined IP address 192.168.39.10 and MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:03.596427  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHPort
	I0116 02:28:03.596663  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHKeyPath
	I0116 02:28:03.596833  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHKeyPath
	I0116 02:28:03.597043  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHUsername
	I0116 02:28:03.597242  338598 main.go:141] libmachine: Using SSH client type: native
	I0116 02:28:03.597558  338598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0116 02:28:03.597570  338598 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 02:28:03.723172  338598 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705372083.710573693
	
	I0116 02:28:03.723198  338598 fix.go:206] guest clock: 1705372083.710573693
	I0116 02:28:03.723205  338598 fix.go:219] Guest: 2024-01-16 02:28:03.710573693 +0000 UTC Remote: 2024-01-16 02:28:03.59343671 +0000 UTC m=+28.823664645 (delta=117.136983ms)
	I0116 02:28:03.723226  338598 fix.go:190] guest clock delta is within tolerance: 117.136983ms
	I0116 02:28:03.723231  338598 start.go:83] releasing machines lock for "addons-133977", held for 28.830981647s
	I0116 02:28:03.723254  338598 main.go:141] libmachine: (addons-133977) Calling .DriverName
	I0116 02:28:03.723595  338598 main.go:141] libmachine: (addons-133977) Calling .GetIP
	I0116 02:28:03.726463  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:03.726865  338598 main.go:141] libmachine: (addons-133977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:06:14", ip: ""} in network mk-addons-133977: {Iface:virbr1 ExpiryTime:2024-01-16 03:27:51 +0000 UTC Type:0 Mac:52:54:00:f8:06:14 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:addons-133977 Clientid:01:52:54:00:f8:06:14}
	I0116 02:28:03.726885  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined IP address 192.168.39.10 and MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:03.727042  338598 main.go:141] libmachine: (addons-133977) Calling .DriverName
	I0116 02:28:03.727647  338598 main.go:141] libmachine: (addons-133977) Calling .DriverName
	I0116 02:28:03.727820  338598 main.go:141] libmachine: (addons-133977) Calling .DriverName
	I0116 02:28:03.727932  338598 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 02:28:03.727974  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHHostname
	I0116 02:28:03.728066  338598 ssh_runner.go:195] Run: cat /version.json
	I0116 02:28:03.728098  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHHostname
	I0116 02:28:03.730582  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:03.730886  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:03.730950  338598 main.go:141] libmachine: (addons-133977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:06:14", ip: ""} in network mk-addons-133977: {Iface:virbr1 ExpiryTime:2024-01-16 03:27:51 +0000 UTC Type:0 Mac:52:54:00:f8:06:14 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:addons-133977 Clientid:01:52:54:00:f8:06:14}
	I0116 02:28:03.730996  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined IP address 192.168.39.10 and MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:03.731069  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHPort
	I0116 02:28:03.731249  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHKeyPath
	I0116 02:28:03.731318  338598 main.go:141] libmachine: (addons-133977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:06:14", ip: ""} in network mk-addons-133977: {Iface:virbr1 ExpiryTime:2024-01-16 03:27:51 +0000 UTC Type:0 Mac:52:54:00:f8:06:14 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:addons-133977 Clientid:01:52:54:00:f8:06:14}
	I0116 02:28:03.731343  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined IP address 192.168.39.10 and MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:03.731405  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHUsername
	I0116 02:28:03.731477  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHPort
	I0116 02:28:03.731582  338598 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-330687/.minikube/machines/addons-133977/id_rsa Username:docker}
	I0116 02:28:03.731629  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHKeyPath
	I0116 02:28:03.731771  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHUsername
	I0116 02:28:03.731926  338598 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-330687/.minikube/machines/addons-133977/id_rsa Username:docker}
	I0116 02:28:03.843521  338598 ssh_runner.go:195] Run: systemctl --version
	I0116 02:28:03.849582  338598 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 02:28:03.855305  338598 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 02:28:03.855395  338598 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 02:28:03.873001  338598 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 02:28:03.873030  338598 start.go:475] detecting cgroup driver to use...
	I0116 02:28:03.873092  338598 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0116 02:28:03.906925  338598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0116 02:28:03.920142  338598 docker.go:217] disabling cri-docker service (if available) ...
	I0116 02:28:03.920223  338598 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 02:28:03.933948  338598 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 02:28:03.947603  338598 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 02:28:04.072823  338598 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 02:28:04.199770  338598 docker.go:233] disabling docker service ...
	I0116 02:28:04.199868  338598 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 02:28:04.213778  338598 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 02:28:04.226022  338598 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 02:28:04.340429  338598 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 02:28:04.445375  338598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 02:28:04.458581  338598 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 02:28:04.476212  338598 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0116 02:28:04.486762  338598 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0116 02:28:04.496324  338598 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0116 02:28:04.496405  338598 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0116 02:28:04.506132  338598 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0116 02:28:04.515921  338598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0116 02:28:04.525704  338598 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0116 02:28:04.535587  338598 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 02:28:04.545574  338598 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0116 02:28:04.555336  338598 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 02:28:04.564447  338598 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 02:28:04.564524  338598 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 02:28:04.578059  338598 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 02:28:04.586698  338598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 02:28:04.705269  338598 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0116 02:28:04.734679  338598 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0116 02:28:04.734795  338598 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0116 02:28:04.739898  338598 retry.go:31] will retry after 823.068853ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0116 02:28:05.563967  338598 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0116 02:28:05.569613  338598 start.go:543] Will wait 60s for crictl version
	I0116 02:28:05.569717  338598 ssh_runner.go:195] Run: which crictl
	I0116 02:28:05.573737  338598 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 02:28:05.614232  338598 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.11
	RuntimeApiVersion:  v1
	I0116 02:28:05.614336  338598 ssh_runner.go:195] Run: containerd --version
	I0116 02:28:05.646932  338598 ssh_runner.go:195] Run: containerd --version
	I0116 02:28:05.677823  338598 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.7.11 ...
	I0116 02:28:05.679528  338598 main.go:141] libmachine: (addons-133977) Calling .GetIP
	I0116 02:28:05.682377  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:05.682750  338598 main.go:141] libmachine: (addons-133977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:06:14", ip: ""} in network mk-addons-133977: {Iface:virbr1 ExpiryTime:2024-01-16 03:27:51 +0000 UTC Type:0 Mac:52:54:00:f8:06:14 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:addons-133977 Clientid:01:52:54:00:f8:06:14}
	I0116 02:28:05.682784  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined IP address 192.168.39.10 and MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:05.683033  338598 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 02:28:05.687264  338598 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 02:28:05.700368  338598 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0116 02:28:05.700439  338598 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 02:28:05.734787  338598 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 02:28:05.734872  338598 ssh_runner.go:195] Run: which lz4
	I0116 02:28:05.738722  338598 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 02:28:05.742783  338598 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 02:28:05.742820  338598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-330687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (457457495 bytes)
	I0116 02:28:07.529974  338598 containerd.go:548] Took 1.791254 seconds to copy over tarball
	I0116 02:28:07.530057  338598 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 02:28:11.006797  338598 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.476701851s)
	I0116 02:28:11.006833  338598 containerd.go:555] Took 3.476826 seconds to extract the tarball
	I0116 02:28:11.006848  338598 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 02:28:11.047502  338598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 02:28:11.157592  338598 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0116 02:28:11.186575  338598 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 02:28:11.226497  338598 retry.go:31] will retry after 202.516944ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-01-16T02:28:11Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0116 02:28:11.430034  338598 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 02:28:11.472653  338598 containerd.go:612] all images are preloaded for containerd runtime.
	I0116 02:28:11.472691  338598 cache_images.go:84] Images are preloaded, skipping loading
	I0116 02:28:11.472753  338598 ssh_runner.go:195] Run: sudo crictl info
	I0116 02:28:11.515456  338598 cni.go:84] Creating CNI manager for ""
	I0116 02:28:11.515485  338598 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0116 02:28:11.515504  338598 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 02:28:11.515527  338598 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.10 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-133977 NodeName:addons-133977 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 02:28:11.515702  338598 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-133977"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 02:28:11.515872  338598 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-133977 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-133977 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 02:28:11.515947  338598 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 02:28:11.526228  338598 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 02:28:11.526328  338598 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 02:28:11.535811  338598 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (386 bytes)
	I0116 02:28:11.552673  338598 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 02:28:11.570021  338598 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0116 02:28:11.586799  338598 ssh_runner.go:195] Run: grep 192.168.39.10	control-plane.minikube.internal$ /etc/hosts
	I0116 02:28:11.590708  338598 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 02:28:11.602654  338598 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977 for IP: 192.168.39.10
	I0116 02:28:11.602713  338598 certs.go:190] acquiring lock for shared ca certs: {Name:mk8f3d606c5ec22a812500455a13e814ad125d74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:28:11.602998  338598 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17965-330687/.minikube/ca.key
	I0116 02:28:11.666672  338598 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-330687/.minikube/ca.crt ...
	I0116 02:28:11.666707  338598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-330687/.minikube/ca.crt: {Name:mk7049578aa894431993ecf91ac08c2acffb8745 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:28:11.666890  338598 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-330687/.minikube/ca.key ...
	I0116 02:28:11.666901  338598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-330687/.minikube/ca.key: {Name:mkde9b2a06a33ececf01835ee00ef0a9962664be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:28:11.666970  338598 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17965-330687/.minikube/proxy-client-ca.key
	I0116 02:28:11.769658  338598 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-330687/.minikube/proxy-client-ca.crt ...
	I0116 02:28:11.769695  338598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-330687/.minikube/proxy-client-ca.crt: {Name:mkbd8f3a79c7df3a5fc1fcf533606b5749ddae1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:28:11.769870  338598 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-330687/.minikube/proxy-client-ca.key ...
	I0116 02:28:11.769882  338598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-330687/.minikube/proxy-client-ca.key: {Name:mk2881ded6ef29c98df6ef2f2ab79803e05f67d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:28:11.769985  338598 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.key
	I0116 02:28:11.770003  338598 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.crt with IP's: []
	I0116 02:28:11.895802  338598 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.crt ...
	I0116 02:28:11.895838  338598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.crt: {Name:mk5833d9dcc2064b1858e3a14ec75585beefaee4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:28:11.896007  338598 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.key ...
	I0116 02:28:11.896017  338598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.key: {Name:mk81b4a5edfb3b9dc0b36d2f0e30b22562f3a296 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:28:11.896090  338598 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/apiserver.key.713efbbe
	I0116 02:28:11.896109  338598 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/apiserver.crt.713efbbe with IP's: [192.168.39.10 10.96.0.1 127.0.0.1 10.0.0.1]
	I0116 02:28:12.175640  338598 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/apiserver.crt.713efbbe ...
	I0116 02:28:12.175684  338598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/apiserver.crt.713efbbe: {Name:mkbdffbc5b8d8291ad0ef0cde74f3f68ebd20d06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:28:12.175894  338598 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/apiserver.key.713efbbe ...
	I0116 02:28:12.175916  338598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/apiserver.key.713efbbe: {Name:mkf4e54fd30aa2e6711ef5bd510b0f73727a6d60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:28:12.176020  338598 certs.go:337] copying /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/apiserver.crt.713efbbe -> /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/apiserver.crt
	I0116 02:28:12.176106  338598 certs.go:341] copying /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/apiserver.key.713efbbe -> /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/apiserver.key
	I0116 02:28:12.176151  338598 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/proxy-client.key
	I0116 02:28:12.176169  338598 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/proxy-client.crt with IP's: []
	I0116 02:28:12.359910  338598 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/proxy-client.crt ...
	I0116 02:28:12.359948  338598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/proxy-client.crt: {Name:mkadf35b671d6214b0c16614865cd60484508ebc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:28:12.360164  338598 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/proxy-client.key ...
	I0116 02:28:12.360182  338598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/proxy-client.key: {Name:mka5a829fae63dde043ebea587e6b08c04d219b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:28:12.360389  338598 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-330687/.minikube/certs/home/jenkins/minikube-integration/17965-330687/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 02:28:12.360436  338598 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-330687/.minikube/certs/home/jenkins/minikube-integration/17965-330687/.minikube/certs/ca.pem (1082 bytes)
	I0116 02:28:12.360465  338598 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-330687/.minikube/certs/home/jenkins/minikube-integration/17965-330687/.minikube/certs/cert.pem (1123 bytes)
	I0116 02:28:12.360492  338598 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-330687/.minikube/certs/home/jenkins/minikube-integration/17965-330687/.minikube/certs/key.pem (1679 bytes)
	I0116 02:28:12.361238  338598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 02:28:12.387392  338598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 02:28:12.410582  338598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 02:28:12.433916  338598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 02:28:12.457204  338598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-330687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 02:28:12.480418  338598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-330687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 02:28:12.503295  338598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-330687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 02:28:12.530071  338598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-330687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0116 02:28:12.555144  338598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-330687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 02:28:12.580486  338598 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 02:28:12.597401  338598 ssh_runner.go:195] Run: openssl version
	I0116 02:28:12.603255  338598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 02:28:12.614323  338598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:28:12.619184  338598 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:28 /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:28:12.619255  338598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:28:12.625122  338598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 02:28:12.636349  338598 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 02:28:12.640693  338598 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 02:28:12.640754  338598 kubeadm.go:404] StartCluster: {Name:addons-133977 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-133977 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:28:12.640905  338598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0116 02:28:12.640971  338598 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 02:28:12.684337  338598 cri.go:89] found id: ""
	I0116 02:28:12.684415  338598 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 02:28:12.694510  338598 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 02:28:12.704616  338598 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 02:28:12.714353  338598 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 02:28:12.714430  338598 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 02:28:12.925159  338598 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 02:28:25.623811  338598 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 02:28:25.623935  338598 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 02:28:25.624060  338598 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 02:28:25.624189  338598 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 02:28:25.624332  338598 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 02:28:25.624423  338598 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 02:28:25.626310  338598 out.go:204]   - Generating certificates and keys ...
	I0116 02:28:25.626414  338598 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 02:28:25.626512  338598 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 02:28:25.626619  338598 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0116 02:28:25.626705  338598 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0116 02:28:25.626762  338598 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0116 02:28:25.626815  338598 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0116 02:28:25.626871  338598 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0116 02:28:25.627009  338598 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-133977 localhost] and IPs [192.168.39.10 127.0.0.1 ::1]
	I0116 02:28:25.627085  338598 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0116 02:28:25.627230  338598 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-133977 localhost] and IPs [192.168.39.10 127.0.0.1 ::1]
	I0116 02:28:25.627326  338598 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0116 02:28:25.627415  338598 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0116 02:28:25.627499  338598 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0116 02:28:25.627594  338598 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 02:28:25.627666  338598 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 02:28:25.627745  338598 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 02:28:25.627827  338598 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 02:28:25.627875  338598 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 02:28:25.627973  338598 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 02:28:25.628086  338598 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 02:28:25.629527  338598 out.go:204]   - Booting up control plane ...
	I0116 02:28:25.629615  338598 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 02:28:25.629699  338598 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 02:28:25.629783  338598 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 02:28:25.629942  338598 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 02:28:25.630077  338598 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 02:28:25.630143  338598 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 02:28:25.630327  338598 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 02:28:25.630480  338598 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504635 seconds
	I0116 02:28:25.630620  338598 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 02:28:25.630793  338598 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 02:28:25.630885  338598 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 02:28:25.631062  338598 kubeadm.go:322] [mark-control-plane] Marking the node addons-133977 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 02:28:25.631114  338598 kubeadm.go:322] [bootstrap-token] Using token: 4kvqv3.hp4tuz7z3o9decml
	I0116 02:28:25.632641  338598 out.go:204]   - Configuring RBAC rules ...
	I0116 02:28:25.632740  338598 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 02:28:25.632813  338598 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 02:28:25.632926  338598 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 02:28:25.633028  338598 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 02:28:25.633122  338598 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 02:28:25.633210  338598 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 02:28:25.633347  338598 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 02:28:25.633398  338598 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 02:28:25.633460  338598 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 02:28:25.633470  338598 kubeadm.go:322] 
	I0116 02:28:25.633541  338598 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 02:28:25.633547  338598 kubeadm.go:322] 
	I0116 02:28:25.633670  338598 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 02:28:25.633688  338598 kubeadm.go:322] 
	I0116 02:28:25.633726  338598 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 02:28:25.633783  338598 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 02:28:25.633830  338598 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 02:28:25.633836  338598 kubeadm.go:322] 
	I0116 02:28:25.633881  338598 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 02:28:25.633888  338598 kubeadm.go:322] 
	I0116 02:28:25.633940  338598 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 02:28:25.633946  338598 kubeadm.go:322] 
	I0116 02:28:25.633986  338598 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 02:28:25.634047  338598 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 02:28:25.634118  338598 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 02:28:25.634125  338598 kubeadm.go:322] 
	I0116 02:28:25.634207  338598 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 02:28:25.634302  338598 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 02:28:25.634316  338598 kubeadm.go:322] 
	I0116 02:28:25.634451  338598 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 4kvqv3.hp4tuz7z3o9decml \
	I0116 02:28:25.634587  338598 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4ce7931e0c8e4b8b85f6aa30de93a826a2198229a57a8662adb6a737d6f7ef7d \
	I0116 02:28:25.634610  338598 kubeadm.go:322] 	--control-plane 
	I0116 02:28:25.634614  338598 kubeadm.go:322] 
	I0116 02:28:25.634710  338598 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 02:28:25.634727  338598 kubeadm.go:322] 
	I0116 02:28:25.634815  338598 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 4kvqv3.hp4tuz7z3o9decml \
	I0116 02:28:25.634956  338598 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4ce7931e0c8e4b8b85f6aa30de93a826a2198229a57a8662adb6a737d6f7ef7d 
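The --discovery-token-ca-cert-hash printed with the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info; joining nodes use it to pin the control plane's CA. The Go sketch below shows how such a hash can be computed from a CA certificate. The certificate path is an assumption based on the certificateDir logged earlier, and this only illustrates the hash format, not kubeadm's own code.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Read the cluster CA certificate (path assumed from the logged certificateDir).
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the DER-encoded SubjectPublicKeyInfo, which is what the
	// sha256:<hex> value in the join command pins.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("sha256:%x\n", sum)
}
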
	I0116 02:28:25.634972  338598 cni.go:84] Creating CNI manager for ""
	I0116 02:28:25.634979  338598 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0116 02:28:25.636856  338598 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 02:28:25.638108  338598 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 02:28:25.651819  338598 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
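The 457 bytes copied to /etc/cni/net.d/1-k8s.conflist above are the bridge CNI configuration written by the "Configuring bridge CNI" step. The log does not show the payload itself; the sketch below writes a generic bridge conflist of the same shape, and the subnet, plugin list, and cniVersion values are illustrative assumptions rather than the exact content minikube generated.

package main

import (
	"fmt"
	"os"
)

// A minimal bridge CNI config list; field values here are assumptions for illustration.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	// Written to the working directory here; the log shows minikube copying the
	// real payload to /etc/cni/net.d/1-k8s.conflist inside the VM.
	if err := os.WriteFile("1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("wrote bridge CNI conflist")
}
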
	I0116 02:28:25.685669  338598 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 02:28:25.685747  338598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:28:25.685785  338598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578 minikube.k8s.io/name=addons-133977 minikube.k8s.io/updated_at=2024_01_16T02_28_25_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:28:25.956059  338598 ops.go:34] apiserver oom_adj: -16
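The kubectl invocations above create the minikube-rbac ClusterRoleBinding, which grants cluster-admin to the kube-system:default service account, and label the node with minikube metadata. minikube shells out to its bundled kubectl for this; the client-go sketch below builds an equivalent binding programmatically and is only an illustration, with the kubeconfig path mirroring the one in the log.

package main

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path mirrors the log; adjust when running elsewhere.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Equivalent of: kubectl create clusterrolebinding minikube-rbac
	//   --clusterrole=cluster-admin --serviceaccount=kube-system:default
	crb := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
		RoleRef: rbacv1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "ClusterRole",
			Name:     "cluster-admin",
		},
		Subjects: []rbacv1.Subject{{
			Kind:      "ServiceAccount",
			Name:      "default",
			Namespace: "kube-system",
		}},
	}
	if _, err := client.RbacV1().ClusterRoleBindings().Create(context.Background(), crb, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
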
	I0116 02:28:25.956160  338598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:28:26.456518  338598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:28:26.957093  338598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:28:27.456958  338598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:28:27.956973  338598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:28:28.456944  338598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:28:28.956822  338598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:28:29.457166  338598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:28:29.956605  338598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:28:30.457009  338598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:28:30.956526  338598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:28:31.456532  338598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:28:31.956862  338598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:28:32.456554  338598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:28:32.956588  338598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:28:33.456410  338598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:28:33.956759  338598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:28:34.456963  338598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:28:34.956982  338598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:28:35.456288  338598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:28:35.956877  338598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:28:36.456340  338598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:28:36.956895  338598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:28:37.456975  338598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:28:37.956758  338598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:28:38.079465  338598 kubeadm.go:1088] duration metric: took 12.393783725s to wait for elevateKubeSystemPrivileges.
	I0116 02:28:38.079509  338598 kubeadm.go:406] StartCluster complete in 25.438763054s
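The repeated "kubectl get sa default" runs between 02:28:25 and 02:28:38 are a readiness poll: the command is retried roughly every 500ms until the default service account exists, which is what the 12.39s elevateKubeSystemPrivileges duration above measures. A schematic Go version of that polling loop follows; the interval, the two-minute timeout, and kubectl being on PATH are assumptions, not minikube's exact implementation.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Poll roughly every 500ms, as the timestamps above suggest, until the
	// "default" ServiceAccount in the target namespace can be fetched.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig", "get", "sa", "default")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}
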
	I0116 02:28:38.079549  338598 settings.go:142] acquiring lock: {Name:mk42e3205e6ad1074a28d92f03040106b3392be5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:28:38.079686  338598 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17965-330687/kubeconfig
	I0116 02:28:38.080073  338598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-330687/kubeconfig: {Name:mke464b512b6ca14c3d12684020331505daf4c6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:28:38.080297  338598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 02:28:38.080435  338598 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
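The toEnable map above drives addon setup, and the heavily interleaved Setting/Checking/Launching lines that follow indicate the addons are processed concurrently. The sketch below fans out over such a map with a WaitGroup; the map contents are a subset of the line above and the goroutine body is a placeholder, not minikube's actual addon code.

package main

import (
	"fmt"
	"sync"
)

func main() {
	// Subset of the toEnable map logged above; the value says whether to enable the addon.
	toEnable := map[string]bool{
		"registry":       true,
		"metrics-server": true,
		"ingress":        true,
		"ambassador":     false,
	}
	var wg sync.WaitGroup
	for name, enable := range toEnable {
		if !enable {
			continue
		}
		wg.Add(1)
		go func(addon string) {
			defer wg.Done()
			// Placeholder for the real work: copying and applying the addon's manifests.
			fmt.Println("enabling addon:", addon)
		}(name)
	}
	wg.Wait()
}
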
	I0116 02:28:38.080544  338598 addons.go:69] Setting yakd=true in profile "addons-133977"
	I0116 02:28:38.080558  338598 addons.go:69] Setting gcp-auth=true in profile "addons-133977"
	I0116 02:28:38.080565  338598 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-133977"
	I0116 02:28:38.080596  338598 config.go:182] Loaded profile config "addons-133977": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0116 02:28:38.080604  338598 addons.go:234] Setting addon yakd=true in "addons-133977"
	I0116 02:28:38.080588  338598 addons.go:69] Setting default-storageclass=true in profile "addons-133977"
	I0116 02:28:38.080614  338598 addons.go:69] Setting storage-provisioner=true in profile "addons-133977"
	I0116 02:28:38.080630  338598 mustload.go:65] Loading cluster: addons-133977
	I0116 02:28:38.080633  338598 addons.go:69] Setting cloud-spanner=true in profile "addons-133977"
	I0116 02:28:38.080635  338598 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-133977"
	I0116 02:28:38.080642  338598 addons.go:234] Setting addon storage-provisioner=true in "addons-133977"
	I0116 02:28:38.080643  338598 addons.go:234] Setting addon cloud-spanner=true in "addons-133977"
	I0116 02:28:38.080644  338598 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-133977"
	I0116 02:28:38.080662  338598 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-133977"
	I0116 02:28:38.080680  338598 host.go:66] Checking if "addons-133977" exists ...
	I0116 02:28:38.080688  338598 host.go:66] Checking if "addons-133977" exists ...
	I0116 02:28:38.080694  338598 host.go:66] Checking if "addons-133977" exists ...
	I0116 02:28:38.080833  338598 config.go:182] Loaded profile config "addons-133977": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0116 02:28:38.081065  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:28:38.081089  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:38.080628  338598 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-133977"
	I0116 02:28:38.081138  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:28:38.081130  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:28:38.081146  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:28:38.081151  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:28:38.081158  338598 addons.go:69] Setting ingress=true in profile "addons-133977"
	I0116 02:28:38.081148  338598 addons.go:69] Setting volumesnapshots=true in profile "addons-133977"
	I0116 02:28:38.081170  338598 addons.go:69] Setting helm-tiller=true in profile "addons-133977"
	I0116 02:28:38.081171  338598 addons.go:234] Setting addon ingress=true in "addons-133977"
	I0116 02:28:38.081190  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:38.081175  338598 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-133977"
	I0116 02:28:38.081208  338598 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-133977"
	I0116 02:28:38.081215  338598 host.go:66] Checking if "addons-133977" exists ...
	I0116 02:28:38.081180  338598 addons.go:234] Setting addon volumesnapshots=true in "addons-133977"
	I0116 02:28:38.081182  338598 addons.go:69] Setting ingress-dns=true in profile "addons-133977"
	I0116 02:28:38.081248  338598 addons.go:234] Setting addon ingress-dns=true in "addons-133977"
	I0116 02:28:38.081277  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:38.081294  338598 host.go:66] Checking if "addons-133977" exists ...
	I0116 02:28:38.081297  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:28:38.081319  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:38.081281  338598 host.go:66] Checking if "addons-133977" exists ...
	I0116 02:28:38.081327  338598 host.go:66] Checking if "addons-133977" exists ...
	I0116 02:28:38.081377  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:38.081415  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:38.081575  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:28:38.081588  338598 host.go:66] Checking if "addons-133977" exists ...
	I0116 02:28:38.081611  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:38.081645  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:28:38.081665  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:28:38.081678  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:38.081694  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:38.081726  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:28:38.081189  338598 addons.go:69] Setting inspektor-gadget=true in profile "addons-133977"
	I0116 02:28:38.081750  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:38.081754  338598 addons.go:234] Setting addon inspektor-gadget=true in "addons-133977"
	I0116 02:28:38.081193  338598 addons.go:69] Setting registry=true in profile "addons-133977"
	I0116 02:28:38.081769  338598 addons.go:234] Setting addon registry=true in "addons-133977"
	I0116 02:28:38.081187  338598 addons.go:69] Setting metrics-server=true in profile "addons-133977"
	I0116 02:28:38.081784  338598 addons.go:234] Setting addon metrics-server=true in "addons-133977"
	I0116 02:28:38.081182  338598 addons.go:234] Setting addon helm-tiller=true in "addons-133977"
	I0116 02:28:38.082052  338598 host.go:66] Checking if "addons-133977" exists ...
	I0116 02:28:38.082061  338598 host.go:66] Checking if "addons-133977" exists ...
	I0116 02:28:38.082447  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:28:38.082512  338598 host.go:66] Checking if "addons-133977" exists ...
	I0116 02:28:38.082470  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:28:38.082544  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:38.082619  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:28:38.082652  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:38.082738  338598 host.go:66] Checking if "addons-133977" exists ...
	I0116 02:28:38.082522  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:38.083040  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:28:38.083079  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:38.102258  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37387
	I0116 02:28:38.102299  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36769
	I0116 02:28:38.102258  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45195
	I0116 02:28:38.102498  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41049
	I0116 02:28:38.103051  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:38.103150  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:38.103229  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:38.103300  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:38.103368  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39065
	I0116 02:28:38.103568  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:28:38.103583  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:38.103743  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:28:38.103786  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:38.104008  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:28:38.104027  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:38.104102  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:38.104238  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:28:38.104253  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:38.104619  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:38.104770  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:28:38.104782  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:38.104829  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:38.104861  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37209
	I0116 02:28:38.105005  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:28:38.105022  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:38.105098  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:38.105152  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:38.105454  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:28:38.105490  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:38.105548  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:38.105627  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:28:38.105658  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:38.105636  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:38.106217  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:28:38.106251  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:38.110855  338598 main.go:141] libmachine: (addons-133977) Calling .GetState
	I0116 02:28:38.111197  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:28:38.111225  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:38.111515  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:28:38.111543  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:38.112580  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:38.113188  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:28:38.113242  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:38.113719  338598 host.go:66] Checking if "addons-133977" exists ...
	I0116 02:28:38.114068  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:28:38.114091  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:38.130101  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35359
	I0116 02:28:38.135108  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:38.135864  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:28:38.135893  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:38.136514  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:38.136760  338598 main.go:141] libmachine: (addons-133977) Calling .GetState
	I0116 02:28:38.141668  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35405
	I0116 02:28:38.142484  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:38.143115  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:28:38.143141  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:38.144732  338598 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-133977"
	I0116 02:28:38.144784  338598 host.go:66] Checking if "addons-133977" exists ...
	I0116 02:28:38.145186  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:28:38.145226  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:38.145494  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:38.146068  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:28:38.146113  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:38.147787  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43451
	I0116 02:28:38.148442  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46225
	I0116 02:28:38.148799  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:38.149300  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:28:38.149323  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:38.149673  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:38.150148  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42701
	I0116 02:28:38.150200  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:28:38.150247  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:38.150826  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:38.151431  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:28:38.151465  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:38.151859  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:38.152482  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:28:38.152529  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:38.152872  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43909
	I0116 02:28:38.153297  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:38.153825  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:28:38.153847  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:38.154208  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:38.154368  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34657
	I0116 02:28:38.154768  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:38.154847  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:28:38.154879  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:38.154988  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46619
	I0116 02:28:38.155356  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:28:38.155371  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:38.155442  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:38.156034  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:38.156122  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:28:38.156146  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:38.156529  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:38.156599  338598 main.go:141] libmachine: (addons-133977) Calling .GetState
	I0116 02:28:38.156791  338598 main.go:141] libmachine: (addons-133977) Calling .GetState
	I0116 02:28:38.157349  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44877
	I0116 02:28:38.157802  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:38.158439  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:28:38.158466  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:38.158949  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:38.159232  338598 main.go:141] libmachine: (addons-133977) Calling .GetState
	I0116 02:28:38.159864  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:38.159873  338598 main.go:141] libmachine: (addons-133977) Calling .DriverName
	I0116 02:28:38.159941  338598 main.go:141] libmachine: (addons-133977) Calling .DriverName
	I0116 02:28:38.164112  338598 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0116 02:28:38.161237  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:28:38.161292  338598 main.go:141] libmachine: (addons-133977) Calling .DriverName
	I0116 02:28:38.162759  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33969
	I0116 02:28:38.164670  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36453
	I0116 02:28:38.165644  338598 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0116 02:28:38.165660  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:38.165663  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0116 02:28:38.165687  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHHostname
	I0116 02:28:38.166776  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:38.168229  338598 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0116 02:28:38.166895  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:38.166911  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:38.167400  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:28:38.168381  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:38.168514  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46547
	I0116 02:28:38.169699  338598 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0116 02:28:38.169721  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0116 02:28:38.169751  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHHostname
	I0116 02:28:38.168673  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37701
	I0116 02:28:38.168927  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:28:38.169940  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:38.169013  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:38.169250  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:28:38.170062  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:38.170533  338598 main.go:141] libmachine: (addons-133977) Calling .GetState
	I0116 02:28:38.170540  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:38.170597  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:38.172145  338598 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0116 02:28:38.171188  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:28:38.171233  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:38.171342  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:28:38.172107  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:38.173318  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHPort
	I0116 02:28:38.173479  338598 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0116 02:28:38.173492  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0116 02:28:38.173510  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHHostname
	I0116 02:28:38.173547  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:38.173603  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:38.173646  338598 main.go:141] libmachine: (addons-133977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:06:14", ip: ""} in network mk-addons-133977: {Iface:virbr1 ExpiryTime:2024-01-16 03:27:51 +0000 UTC Type:0 Mac:52:54:00:f8:06:14 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:addons-133977 Clientid:01:52:54:00:f8:06:14}
	I0116 02:28:38.173667  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined IP address 192.168.39.10 and MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:38.174486  338598 addons.go:234] Setting addon default-storageclass=true in "addons-133977"
	I0116 02:28:38.174532  338598 host.go:66] Checking if "addons-133977" exists ...
	I0116 02:28:38.174896  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:38.174925  338598 main.go:141] libmachine: (addons-133977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:06:14", ip: ""} in network mk-addons-133977: {Iface:virbr1 ExpiryTime:2024-01-16 03:27:51 +0000 UTC Type:0 Mac:52:54:00:f8:06:14 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:addons-133977 Clientid:01:52:54:00:f8:06:14}
	I0116 02:28:38.174945  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined IP address 192.168.39.10 and MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:38.174975  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHKeyPath
	I0116 02:28:38.175045  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHPort
	I0116 02:28:38.175109  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:38.175284  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHUsername
	I0116 02:28:38.175337  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:38.175370  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHKeyPath
	I0116 02:28:38.176313  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:28:38.176354  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:38.176552  338598 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-330687/.minikube/machines/addons-133977/id_rsa Username:docker}
	I0116 02:28:38.176989  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHUsername
	I0116 02:28:38.177065  338598 main.go:141] libmachine: (addons-133977) Calling .GetState
	I0116 02:28:38.177249  338598 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-330687/.minikube/machines/addons-133977/id_rsa Username:docker}
	I0116 02:28:38.178215  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:28:38.178257  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:38.178270  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:38.178792  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:28:38.178840  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:38.179297  338598 main.go:141] libmachine: (addons-133977) Calling .DriverName
	I0116 02:28:38.179374  338598 main.go:141] libmachine: (addons-133977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:06:14", ip: ""} in network mk-addons-133977: {Iface:virbr1 ExpiryTime:2024-01-16 03:27:51 +0000 UTC Type:0 Mac:52:54:00:f8:06:14 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:addons-133977 Clientid:01:52:54:00:f8:06:14}
	I0116 02:28:38.179390  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined IP address 192.168.39.10 and MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:38.179904  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHPort
	I0116 02:28:38.181794  338598 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0116 02:28:38.180237  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHKeyPath
	I0116 02:28:38.182092  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40273
	I0116 02:28:38.186360  338598 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0116 02:28:38.184888  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHUsername
	I0116 02:28:38.185763  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:38.189368  338598 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0116 02:28:38.188454  338598 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-330687/.minikube/machines/addons-133977/id_rsa Username:docker}
	I0116 02:28:38.189107  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:28:38.190617  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:38.192252  338598 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0116 02:28:38.194922  338598 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0116 02:28:38.191807  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:38.194888  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40595
	I0116 02:28:38.192861  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38045
	I0116 02:28:38.197657  338598 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0116 02:28:38.196749  338598 main.go:141] libmachine: (addons-133977) Calling .DriverName
	I0116 02:28:38.198899  338598 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0116 02:28:38.197195  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:38.197588  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:38.201561  338598 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0116 02:28:38.200374  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34081
	I0116 02:28:38.200868  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:28:38.200929  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:28:38.202832  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:38.202905  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:38.202992  338598 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0116 02:28:38.203014  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0116 02:28:38.203036  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHHostname
	I0116 02:28:38.203565  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:38.203585  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:38.204326  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:28:38.204332  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:38.204367  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:38.204761  338598 main.go:141] libmachine: (addons-133977) Calling .GetState
	I0116 02:28:38.204981  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:28:38.205005  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:38.205271  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:38.205370  338598 main.go:141] libmachine: (addons-133977) Calling .GetState
	I0116 02:28:38.206483  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:38.207073  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45569
	I0116 02:28:38.207269  338598 main.go:141] libmachine: (addons-133977) Calling .DriverName
	I0116 02:28:38.207341  338598 main.go:141] libmachine: (addons-133977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:06:14", ip: ""} in network mk-addons-133977: {Iface:virbr1 ExpiryTime:2024-01-16 03:27:51 +0000 UTC Type:0 Mac:52:54:00:f8:06:14 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:addons-133977 Clientid:01:52:54:00:f8:06:14}
	I0116 02:28:38.207356  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined IP address 192.168.39.10 and MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:38.207386  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHPort
	I0116 02:28:38.207691  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHKeyPath
	I0116 02:28:38.209464  338598 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I0116 02:28:38.207772  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:38.207981  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHUsername
	I0116 02:28:38.208265  338598 main.go:141] libmachine: (addons-133977) Calling .DriverName
	I0116 02:28:38.208630  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33205
	I0116 02:28:38.211096  338598 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0116 02:28:38.211111  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0116 02:28:38.211138  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHHostname
	I0116 02:28:38.211402  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45727
	I0116 02:28:38.211593  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37479
	I0116 02:28:38.213879  338598 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0116 02:28:38.212434  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39183
	I0116 02:28:38.212480  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:38.212630  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:28:38.212662  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40311
	I0116 02:28:38.212782  338598 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-330687/.minikube/machines/addons-133977/id_rsa Username:docker}
	I0116 02:28:38.213311  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:38.213810  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:38.214861  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:38.215293  338598 main.go:141] libmachine: (addons-133977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:06:14", ip: ""} in network mk-addons-133977: {Iface:virbr1 ExpiryTime:2024-01-16 03:27:51 +0000 UTC Type:0 Mac:52:54:00:f8:06:14 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:addons-133977 Clientid:01:52:54:00:f8:06:14}
	I0116 02:28:38.215331  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined IP address 192.168.39.10 and MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:38.215408  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:38.215688  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:38.215779  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHPort
	I0116 02:28:38.215829  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:38.215877  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:38.216023  338598 main.go:141] libmachine: (addons-133977) Calling .GetState
	I0116 02:28:38.216530  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:28:38.216554  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:38.216606  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:28:38.216630  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:38.216690  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:28:38.216709  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:38.216763  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:28:38.216780  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:38.216788  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHKeyPath
	I0116 02:28:38.216947  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHUsername
	I0116 02:28:38.217063  338598 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-330687/.minikube/machines/addons-133977/id_rsa Username:docker}
	I0116 02:28:38.217367  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:38.217428  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:38.217471  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:38.217770  338598 main.go:141] libmachine: (addons-133977) Calling .DriverName
	I0116 02:28:38.217805  338598 main.go:141] libmachine: (addons-133977) Calling .GetState
	I0116 02:28:38.217841  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:38.217865  338598 main.go:141] libmachine: (addons-133977) Calling .GetState
	I0116 02:28:38.217974  338598 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0116 02:28:38.217991  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0116 02:28:38.218011  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHHostname
	I0116 02:28:38.218254  338598 main.go:141] libmachine: (addons-133977) Calling .GetState
	I0116 02:28:38.219999  338598 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0116 02:28:38.220005  338598 main.go:141] libmachine: (addons-133977) Calling .DriverName
	I0116 02:28:38.221473  338598 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0116 02:28:38.221494  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0116 02:28:38.221511  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHHostname
	I0116 02:28:38.219333  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:28:38.221572  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:38.219785  338598 main.go:141] libmachine: (addons-133977) Calling .DriverName
	I0116 02:28:38.218756  338598 main.go:141] libmachine: (addons-133977) Calling .GetState
	I0116 02:28:38.223312  338598 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0116 02:28:38.224954  338598 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0116 02:28:38.224975  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0116 02:28:38.224996  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHHostname
	I0116 02:28:38.227103  338598 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0116 02:28:38.223080  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:38.223585  338598 main.go:141] libmachine: (addons-133977) Calling .DriverName
	I0116 02:28:38.224224  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:38.225085  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHPort
	I0116 02:28:38.225773  338598 main.go:141] libmachine: (addons-133977) Calling .DriverName
	I0116 02:28:38.225962  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:38.226526  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHPort
	I0116 02:28:38.227539  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
	I0116 02:28:38.228553  338598 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 02:28:38.228567  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 02:28:38.228588  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHHostname
	I0116 02:28:38.228457  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:38.228711  338598 main.go:141] libmachine: (addons-133977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:06:14", ip: ""} in network mk-addons-133977: {Iface:virbr1 ExpiryTime:2024-01-16 03:27:51 +0000 UTC Type:0 Mac:52:54:00:f8:06:14 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:addons-133977 Clientid:01:52:54:00:f8:06:14}
	I0116 02:28:38.228732  338598 main.go:141] libmachine: (addons-133977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:06:14", ip: ""} in network mk-addons-133977: {Iface:virbr1 ExpiryTime:2024-01-16 03:27:51 +0000 UTC Type:0 Mac:52:54:00:f8:06:14 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:addons-133977 Clientid:01:52:54:00:f8:06:14}
	I0116 02:28:38.228741  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined IP address 192.168.39.10 and MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:38.228754  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined IP address 192.168.39.10 and MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:38.229326  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHKeyPath
	I0116 02:28:38.231082  338598 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0116 02:28:38.229486  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHKeyPath
	I0116 02:28:38.229557  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHUsername
	I0116 02:28:38.229578  338598 main.go:141] libmachine: (addons-133977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:06:14", ip: ""} in network mk-addons-133977: {Iface:virbr1 ExpiryTime:2024-01-16 03:27:51 +0000 UTC Type:0 Mac:52:54:00:f8:06:14 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:addons-133977 Clientid:01:52:54:00:f8:06:14}
	I0116 02:28:38.229795  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:28:38.229866  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:38.230043  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43481
	I0116 02:28:38.230078  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHPort
	I0116 02:28:38.231962  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:38.232888  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined IP address 192.168.39.10 and MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:38.232905  338598 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 02:28:38.234895  338598 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 02:28:38.232928  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:38.234914  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 02:28:38.234929  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHHostname
	I0116 02:28:38.236628  338598 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0116 02:28:38.232949  338598 main.go:141] libmachine: (addons-133977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:06:14", ip: ""} in network mk-addons-133977: {Iface:virbr1 ExpiryTime:2024-01-16 03:27:51 +0000 UTC Type:0 Mac:52:54:00:f8:06:14 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:addons-133977 Clientid:01:52:54:00:f8:06:14}
	I0116 02:28:38.232524  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHPort
	I0116 02:28:38.233243  338598 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-330687/.minikube/machines/addons-133977/id_rsa Username:docker}
	I0116 02:28:38.233255  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHUsername
	I0116 02:28:38.233294  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHKeyPath
	I0116 02:28:38.233694  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:38.233755  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:28:38.238346  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:38.238575  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined IP address 192.168.39.10 and MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:38.240238  338598 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0116 02:28:38.239036  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHKeyPath
	I0116 02:28:38.239069  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:38.239097  338598 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-330687/.minikube/machines/addons-133977/id_rsa Username:docker}
	I0116 02:28:38.239131  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:38.239166  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHUsername
	I0116 02:28:38.239309  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHPort
	I0116 02:28:38.239315  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:28:38.241669  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:38.241975  338598 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0116 02:28:38.241993  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0116 02:28:38.242014  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHHostname
	I0116 02:28:38.242096  338598 main.go:141] libmachine: (addons-133977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:06:14", ip: ""} in network mk-addons-133977: {Iface:virbr1 ExpiryTime:2024-01-16 03:27:51 +0000 UTC Type:0 Mac:52:54:00:f8:06:14 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:addons-133977 Clientid:01:52:54:00:f8:06:14}
	I0116 02:28:38.242119  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined IP address 192.168.39.10 and MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:38.242319  338598 main.go:141] libmachine: (addons-133977) Calling .GetState
	I0116 02:28:38.242388  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHUsername
	I0116 02:28:38.242635  338598 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-330687/.minikube/machines/addons-133977/id_rsa Username:docker}
	I0116 02:28:38.242663  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:38.242721  338598 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-330687/.minikube/machines/addons-133977/id_rsa Username:docker}
	I0116 02:28:38.243254  338598 main.go:141] libmachine: (addons-133977) Calling .GetState
	I0116 02:28:38.243265  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHKeyPath
	I0116 02:28:38.245290  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHUsername
	I0116 02:28:38.245510  338598 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-330687/.minikube/machines/addons-133977/id_rsa Username:docker}
	I0116 02:28:38.245893  338598 main.go:141] libmachine: (addons-133977) Calling .DriverName
	I0116 02:28:38.248436  338598 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0116 02:28:38.246808  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:38.247321  338598 main.go:141] libmachine: (addons-133977) Calling .DriverName
	I0116 02:28:38.247356  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHPort
	I0116 02:28:38.250655  338598 out.go:177]   - Using image docker.io/busybox:stable
	I0116 02:28:38.249627  338598 main.go:141] libmachine: (addons-133977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:06:14", ip: ""} in network mk-addons-133977: {Iface:virbr1 ExpiryTime:2024-01-16 03:27:51 +0000 UTC Type:0 Mac:52:54:00:f8:06:14 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:addons-133977 Clientid:01:52:54:00:f8:06:14}
	I0116 02:28:38.250697  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined IP address 192.168.39.10 and MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:38.249730  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHKeyPath
	I0116 02:28:38.251982  338598 out.go:177]   - Using image docker.io/registry:2.8.3
	I0116 02:28:38.250908  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHUsername
	I0116 02:28:38.254520  338598 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0116 02:28:38.253350  338598 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0116 02:28:38.253443  338598 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-330687/.minikube/machines/addons-133977/id_rsa Username:docker}
	I0116 02:28:38.256328  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42451
	I0116 02:28:38.256470  338598 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0116 02:28:38.256482  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0116 02:28:38.256499  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHHostname
	I0116 02:28:38.256471  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0116 02:28:38.256552  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHHostname
	I0116 02:28:38.256983  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:38.257572  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:28:38.257598  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:38.258242  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:38.258450  338598 main.go:141] libmachine: (addons-133977) Calling .GetState
	I0116 02:28:38.260609  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:38.260944  338598 main.go:141] libmachine: (addons-133977) Calling .DriverName
	I0116 02:28:38.260946  338598 main.go:141] libmachine: (addons-133977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:06:14", ip: ""} in network mk-addons-133977: {Iface:virbr1 ExpiryTime:2024-01-16 03:27:51 +0000 UTC Type:0 Mac:52:54:00:f8:06:14 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:addons-133977 Clientid:01:52:54:00:f8:06:14}
	I0116 02:28:38.260971  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined IP address 192.168.39.10 and MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:38.261154  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHPort
	I0116 02:28:38.261207  338598 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 02:28:38.261222  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 02:28:38.261246  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHHostname
	I0116 02:28:38.261341  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHKeyPath
	I0116 02:28:38.261500  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHUsername
	I0116 02:28:38.261601  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:38.261671  338598 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-330687/.minikube/machines/addons-133977/id_rsa Username:docker}
	I0116 02:28:38.261969  338598 main.go:141] libmachine: (addons-133977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:06:14", ip: ""} in network mk-addons-133977: {Iface:virbr1 ExpiryTime:2024-01-16 03:27:51 +0000 UTC Type:0 Mac:52:54:00:f8:06:14 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:addons-133977 Clientid:01:52:54:00:f8:06:14}
	I0116 02:28:38.261988  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined IP address 192.168.39.10 and MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:38.262228  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHPort
	I0116 02:28:38.262417  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHKeyPath
	I0116 02:28:38.262637  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHUsername
	I0116 02:28:38.262821  338598 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-330687/.minikube/machines/addons-133977/id_rsa Username:docker}
	I0116 02:28:38.264374  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:38.264802  338598 main.go:141] libmachine: (addons-133977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:06:14", ip: ""} in network mk-addons-133977: {Iface:virbr1 ExpiryTime:2024-01-16 03:27:51 +0000 UTC Type:0 Mac:52:54:00:f8:06:14 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:addons-133977 Clientid:01:52:54:00:f8:06:14}
	I0116 02:28:38.264834  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined IP address 192.168.39.10 and MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:38.264952  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHPort
	I0116 02:28:38.265140  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHKeyPath
	I0116 02:28:38.265306  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHUsername
	I0116 02:28:38.265478  338598 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-330687/.minikube/machines/addons-133977/id_rsa Username:docker}
	W0116 02:28:38.268570  338598 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:52358->192.168.39.10:22: read: connection reset by peer
	I0116 02:28:38.268598  338598 retry.go:31] will retry after 149.144272ms: ssh: handshake failed: read tcp 192.168.39.1:52358->192.168.39.10:22: read: connection reset by peer
	W0116 02:28:38.268797  338598 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:52360->192.168.39.10:22: read: connection reset by peer
	I0116 02:28:38.268807  338598 retry.go:31] will retry after 207.562264ms: ssh: handshake failed: read tcp 192.168.39.1:52360->192.168.39.10:22: read: connection reset by peer
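
The two warnings just above are transient: sshutil logs the handshake failure and retry.go schedules another dial attempt after a short randomized delay, which is why the run continues normally at 02:28:38.587. A minimal sketch of that retry-with-delay pattern, assuming a placeholder dial function rather than minikube's actual SSH client:

    // Sketch only: dial is a stand-in for the SSH handshake, not a minikube API.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func dial() error { return fmt.Errorf("ssh: handshake failed") }

    func main() {
        for attempt := 1; attempt <= 3; attempt++ {
            err := dial()
            if err == nil {
                return
            }
            // Short randomized pause before the next attempt, mirroring the
            // "will retry after ..." lines in the log above.
            delay := time.Duration(100+rand.Intn(150)) * time.Millisecond
            fmt.Printf("attempt %d failed, will retry after %v: %v\n", attempt, delay, err)
            time.Sleep(delay)
        }
    }
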
	I0116 02:28:38.587270  338598 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-133977" context rescaled to 1 replicas
	I0116 02:28:38.587336  338598 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0116 02:28:38.589098  338598 out.go:177] * Verifying Kubernetes components...
	I0116 02:28:38.590514  338598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:28:38.706360  338598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0116 02:28:38.722460  338598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0116 02:28:38.783501  338598 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 02:28:38.783526  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0116 02:28:38.879100  338598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 02:28:38.893707  338598 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0116 02:28:38.893738  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0116 02:28:39.024478  338598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 02:28:39.062521  338598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
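
The /bin/bash pipeline above rewrites the coredns ConfigMap in place: it inserts a hosts{} block resolving host.minikube.internal to the host gateway (192.168.39.1) ahead of the forward plugin and adds the log plugin after errors, then replaces the ConfigMap. A small Go sketch of the same text rewrite, using a trimmed example Corefile rather than the cluster's real ConfigMap and nothing beyond the standard library:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Trimmed example Corefile; the real one lives in the coredns ConfigMap.
        corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}"
        hosts := "        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }\n"
        // Insert the hosts block immediately before the forward plugin, as the
        // sed expression in the command above does.
        patched := strings.Replace(corefile, "        forward .", hosts+"        forward .", 1)
        fmt.Println(patched)
    }
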
	I0116 02:28:39.083313  338598 node_ready.go:35] waiting up to 6m0s for node "addons-133977" to be "Ready" ...
	I0116 02:28:39.086842  338598 node_ready.go:49] node "addons-133977" has status "Ready":"True"
	I0116 02:28:39.086871  338598 node_ready.go:38] duration metric: took 3.513216ms waiting for node "addons-133977" to be "Ready" ...
	I0116 02:28:39.086881  338598 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 02:28:39.105139  338598 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-bbh2g" in "kube-system" namespace to be "Ready" ...
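
node_ready.go and pod_ready.go above poll the API server until the node and each system-critical pod report "Ready". A hedged client-go sketch of the node-side check; the kubeconfig path and node name are taken from this log, everything else is an assumption, not minikube's implementation:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        node, err := client.CoreV1().Nodes().Get(context.Background(), "addons-133977", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Report the Ready condition, which is what the node_ready lines check.
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                fmt.Printf("node %q Ready=%s\n", node.Name, c.Status)
            }
        }
    }
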
	I0116 02:28:39.154652  338598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0116 02:28:39.171090  338598 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0116 02:28:39.171132  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0116 02:28:39.195483  338598 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 02:28:39.195520  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 02:28:39.321125  338598 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0116 02:28:39.321156  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0116 02:28:39.415294  338598 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0116 02:28:39.415328  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0116 02:28:39.426527  338598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0116 02:28:39.568890  338598 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0116 02:28:39.568927  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0116 02:28:39.655702  338598 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0116 02:28:39.655732  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0116 02:28:39.853293  338598 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 02:28:39.853324  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 02:28:39.877731  338598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0116 02:28:39.877748  338598 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0116 02:28:39.877769  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0116 02:28:39.885977  338598 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0116 02:28:39.886007  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0116 02:28:40.014914  338598 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0116 02:28:40.014943  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0116 02:28:40.100147  338598 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0116 02:28:40.100179  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0116 02:28:40.274335  338598 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0116 02:28:40.274363  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0116 02:28:40.307111  338598 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0116 02:28:40.307148  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0116 02:28:40.331164  338598 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0116 02:28:40.331211  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0116 02:28:40.399778  338598 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0116 02:28:40.399824  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0116 02:28:40.410664  338598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 02:28:40.433399  338598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0116 02:28:40.515100  338598 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0116 02:28:40.515128  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0116 02:28:40.538111  338598 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0116 02:28:40.538139  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0116 02:28:40.575358  338598 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0116 02:28:40.575384  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0116 02:28:40.590501  338598 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0116 02:28:40.590527  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0116 02:28:40.707202  338598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0116 02:28:40.753664  338598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0116 02:28:40.931364  338598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0116 02:28:40.955286  338598 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0116 02:28:40.955319  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0116 02:28:41.010418  338598 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0116 02:28:41.010474  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0116 02:28:41.135606  338598 pod_ready.go:102] pod "coredns-5dd5756b68-bbh2g" in "kube-system" namespace has status "Ready":"False"
	I0116 02:28:41.135961  338598 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0116 02:28:41.135988  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0116 02:28:41.625830  338598 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0116 02:28:41.625859  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0116 02:28:41.660311  338598 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0116 02:28:41.660341  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0116 02:28:42.054617  338598 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0116 02:28:42.054644  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0116 02:28:42.084594  338598 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0116 02:28:42.084623  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0116 02:28:42.226071  338598 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0116 02:28:42.226100  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0116 02:28:42.506759  338598 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0116 02:28:42.506810  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0116 02:28:42.552014  338598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0116 02:28:42.668034  338598 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0116 02:28:42.668062  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0116 02:28:42.909708  338598 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0116 02:28:42.909733  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0116 02:28:43.103050  338598 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0116 02:28:43.103090  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0116 02:28:43.409836  338598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
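
Each addon follows the same pattern in this log: its manifests are staged onto the node (the "scp memory -->" lines) and then applied as one batch with a single kubectl invocation like the command above. A rough local sketch of composing such a batched apply with os/exec; the manifest selection is a placeholder and this is not minikube's ssh_runner, which executes the command over SSH on the guest:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.28.4/kubectl"
        manifests := []string{ // placeholder subset of the files applied above
            "/etc/kubernetes/addons/csi-hostpath-attacher.yaml",
            "/etc/kubernetes/addons/csi-hostpath-plugin.yaml",
        }
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        cmd := exec.Command(kubectl, args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }
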
	I0116 02:28:43.613669  338598 pod_ready.go:102] pod "coredns-5dd5756b68-bbh2g" in "kube-system" namespace has status "Ready":"False"
	I0116 02:28:44.501284  338598 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.794875804s)
	I0116 02:28:44.501359  338598 main.go:141] libmachine: Making call to close driver server
	I0116 02:28:44.501375  338598 main.go:141] libmachine: (addons-133977) Calling .Close
	I0116 02:28:44.501728  338598 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:28:44.501752  338598 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:28:44.501772  338598 main.go:141] libmachine: Making call to close driver server
	I0116 02:28:44.501783  338598 main.go:141] libmachine: (addons-133977) Calling .Close
	I0116 02:28:44.502150  338598 main.go:141] libmachine: (addons-133977) DBG | Closing plugin on server side
	I0116 02:28:44.502184  338598 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:28:44.502194  338598 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:28:44.711062  338598 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.988545737s)
	I0116 02:28:44.711153  338598 main.go:141] libmachine: Making call to close driver server
	I0116 02:28:44.711170  338598 main.go:141] libmachine: (addons-133977) Calling .Close
	I0116 02:28:44.711547  338598 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:28:44.711578  338598 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:28:44.711622  338598 main.go:141] libmachine: (addons-133977) DBG | Closing plugin on server side
	I0116 02:28:44.713034  338598 main.go:141] libmachine: Making call to close driver server
	I0116 02:28:44.713060  338598 main.go:141] libmachine: (addons-133977) Calling .Close
	I0116 02:28:44.713495  338598 main.go:141] libmachine: (addons-133977) DBG | Closing plugin on server side
	I0116 02:28:44.713534  338598 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:28:44.713548  338598 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:28:44.810797  338598 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0116 02:28:44.810847  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHHostname
	I0116 02:28:44.814394  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:44.814888  338598 main.go:141] libmachine: (addons-133977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:06:14", ip: ""} in network mk-addons-133977: {Iface:virbr1 ExpiryTime:2024-01-16 03:27:51 +0000 UTC Type:0 Mac:52:54:00:f8:06:14 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:addons-133977 Clientid:01:52:54:00:f8:06:14}
	I0116 02:28:44.814923  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined IP address 192.168.39.10 and MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:44.815077  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHPort
	I0116 02:28:44.815312  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHKeyPath
	I0116 02:28:44.815478  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHUsername
	I0116 02:28:44.815621  338598 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-330687/.minikube/machines/addons-133977/id_rsa Username:docker}
	I0116 02:28:45.614538  338598 pod_ready.go:102] pod "coredns-5dd5756b68-bbh2g" in "kube-system" namespace has status "Ready":"False"
	I0116 02:28:45.657056  338598 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0116 02:28:45.960494  338598 addons.go:234] Setting addon gcp-auth=true in "addons-133977"
	I0116 02:28:45.960555  338598 host.go:66] Checking if "addons-133977" exists ...
	I0116 02:28:45.960866  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:28:45.960909  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:45.976854  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38487
	I0116 02:28:45.977401  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:45.977946  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:28:45.977970  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:45.978411  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:45.978964  338598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:28:45.978999  338598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:45.995644  338598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42363
	I0116 02:28:45.996276  338598 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:45.996874  338598 main.go:141] libmachine: Using API Version  1
	I0116 02:28:45.996911  338598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:45.997309  338598 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:45.997539  338598 main.go:141] libmachine: (addons-133977) Calling .GetState
	I0116 02:28:45.999352  338598 main.go:141] libmachine: (addons-133977) Calling .DriverName
	I0116 02:28:45.999630  338598 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0116 02:28:45.999662  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHHostname
	I0116 02:28:46.002468  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:46.002991  338598 main.go:141] libmachine: (addons-133977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:06:14", ip: ""} in network mk-addons-133977: {Iface:virbr1 ExpiryTime:2024-01-16 03:27:51 +0000 UTC Type:0 Mac:52:54:00:f8:06:14 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:addons-133977 Clientid:01:52:54:00:f8:06:14}
	I0116 02:28:46.003026  338598 main.go:141] libmachine: (addons-133977) DBG | domain addons-133977 has defined IP address 192.168.39.10 and MAC address 52:54:00:f8:06:14 in network mk-addons-133977
	I0116 02:28:46.003370  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHPort
	I0116 02:28:46.003637  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHKeyPath
	I0116 02:28:46.003873  338598 main.go:141] libmachine: (addons-133977) Calling .GetSSHUsername
	I0116 02:28:46.004054  338598 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-330687/.minikube/machines/addons-133977/id_rsa Username:docker}
	I0116 02:28:46.425981  338598 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.54683912s)
	I0116 02:28:46.426037  338598 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.401515006s)
	I0116 02:28:46.426046  338598 main.go:141] libmachine: Making call to close driver server
	I0116 02:28:46.426105  338598 main.go:141] libmachine: (addons-133977) Calling .Close
	I0116 02:28:46.426122  338598 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.363562579s)
	I0116 02:28:46.426154  338598 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0116 02:28:46.426079  338598 main.go:141] libmachine: Making call to close driver server
	I0116 02:28:46.426182  338598 main.go:141] libmachine: (addons-133977) Calling .Close
	I0116 02:28:46.426404  338598 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:28:46.426430  338598 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:28:46.426442  338598 main.go:141] libmachine: Making call to close driver server
	I0116 02:28:46.426452  338598 main.go:141] libmachine: (addons-133977) Calling .Close
	I0116 02:28:46.426523  338598 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:28:46.426530  338598 main.go:141] libmachine: (addons-133977) DBG | Closing plugin on server side
	I0116 02:28:46.426538  338598 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:28:46.426549  338598 main.go:141] libmachine: Making call to close driver server
	I0116 02:28:46.426558  338598 main.go:141] libmachine: (addons-133977) Calling .Close
	I0116 02:28:46.426680  338598 main.go:141] libmachine: (addons-133977) DBG | Closing plugin on server side
	I0116 02:28:46.426706  338598 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:28:46.426737  338598 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:28:46.426763  338598 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:28:46.426773  338598 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:28:46.426790  338598 main.go:141] libmachine: (addons-133977) DBG | Closing plugin on server side
	I0116 02:28:46.433419  338598 main.go:141] libmachine: Making call to close driver server
	I0116 02:28:46.433448  338598 main.go:141] libmachine: (addons-133977) Calling .Close
	I0116 02:28:46.433774  338598 main.go:141] libmachine: (addons-133977) DBG | Closing plugin on server side
	I0116 02:28:46.433802  338598 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:28:46.433816  338598 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:28:47.645725  338598 pod_ready.go:102] pod "coredns-5dd5756b68-bbh2g" in "kube-system" namespace has status "Ready":"False"
	I0116 02:28:47.994646  338598 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.568072078s)
	I0116 02:28:47.994710  338598 main.go:141] libmachine: Making call to close driver server
	I0116 02:28:47.994723  338598 main.go:141] libmachine: (addons-133977) Calling .Close
	I0116 02:28:47.995056  338598 main.go:141] libmachine: (addons-133977) DBG | Closing plugin on server side
	I0116 02:28:47.995106  338598 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:28:47.995122  338598 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:28:47.995143  338598 main.go:141] libmachine: Making call to close driver server
	I0116 02:28:47.995155  338598 main.go:141] libmachine: (addons-133977) Calling .Close
	I0116 02:28:47.995451  338598 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:28:47.995498  338598 main.go:141] libmachine: (addons-133977) DBG | Closing plugin on server side
	I0116 02:28:47.995503  338598 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:28:47.996443  338598 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.841755003s)
	I0116 02:28:47.996474  338598 main.go:141] libmachine: Making call to close driver server
	I0116 02:28:47.996487  338598 main.go:141] libmachine: (addons-133977) Calling .Close
	I0116 02:28:47.996739  338598 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:28:47.996784  338598 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:28:47.996805  338598 main.go:141] libmachine: Making call to close driver server
	I0116 02:28:47.996824  338598 main.go:141] libmachine: (addons-133977) Calling .Close
	I0116 02:28:47.996818  338598 main.go:141] libmachine: (addons-133977) DBG | Closing plugin on server side
	I0116 02:28:47.997093  338598 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:28:47.997110  338598 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:28:47.997126  338598 main.go:141] libmachine: (addons-133977) DBG | Closing plugin on server side
	I0116 02:28:48.040426  338598 main.go:141] libmachine: Making call to close driver server
	I0116 02:28:48.040459  338598 main.go:141] libmachine: (addons-133977) Calling .Close
	I0116 02:28:48.040728  338598 main.go:141] libmachine: (addons-133977) DBG | Closing plugin on server side
	I0116 02:28:48.040790  338598 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:28:48.040812  338598 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:28:50.113758  338598 pod_ready.go:102] pod "coredns-5dd5756b68-bbh2g" in "kube-system" namespace has status "Ready":"False"
	I0116 02:28:50.791278  338598 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.913502015s)
	I0116 02:28:50.791344  338598 main.go:141] libmachine: Making call to close driver server
	I0116 02:28:50.791358  338598 main.go:141] libmachine: (addons-133977) Calling .Close
	I0116 02:28:50.791378  338598 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.380684128s)
	I0116 02:28:50.791410  338598 main.go:141] libmachine: Making call to close driver server
	I0116 02:28:50.791434  338598 main.go:141] libmachine: (addons-133977) Calling .Close
	I0116 02:28:50.791486  338598 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.358027107s)
	I0116 02:28:50.791537  338598 main.go:141] libmachine: Making call to close driver server
	I0116 02:28:50.791554  338598 main.go:141] libmachine: (addons-133977) Calling .Close
	I0116 02:28:50.791563  338598 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.084312704s)
	I0116 02:28:50.791605  338598 main.go:141] libmachine: Making call to close driver server
	I0116 02:28:50.791631  338598 main.go:141] libmachine: (addons-133977) Calling .Close
	I0116 02:28:50.791663  338598 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (10.037960076s)
	I0116 02:28:50.791692  338598 main.go:141] libmachine: Making call to close driver server
	I0116 02:28:50.791702  338598 main.go:141] libmachine: (addons-133977) Calling .Close
	I0116 02:28:50.791789  338598 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.86039486s)
	I0116 02:28:50.791825  338598 main.go:141] libmachine: Successfully made call to close driver server
	W0116 02:28:50.791831  338598 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0116 02:28:50.791844  338598 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:28:50.791856  338598 main.go:141] libmachine: Making call to close driver server
	I0116 02:28:50.791860  338598 retry.go:31] will retry after 370.041686ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
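
The failure above is an ordering problem rather than a broken manifest: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is submitted in the same apply as the CRDs that define it, so no REST mapping for that kind exists yet; the re-apply scheduled here (issued with --force at 02:28:51 and completing at 02:28:54 below) goes through once the CRDs are established. A sketch, assuming the apiextensions client-go clientset and the kubeconfig path from this log, of waiting for the Established condition before applying the class:

    package main

    import (
        "context"
        "fmt"
        "time"

        apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitEstablished polls a CRD until its Established condition is True,
    // the precondition the "ensure CRDs are installed first" error asks for.
    func waitEstablished(c apiextclient.Interface, name string) error {
        for i := 0; i < 30; i++ {
            crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(context.Background(), name, metav1.GetOptions{})
            if err == nil {
                for _, cond := range crd.Status.Conditions {
                    if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("CRD %s not established in time", name)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := apiextclient.NewForConfigOrDie(cfg)
        if err := waitEstablished(client, "volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
            panic(err)
        }
        fmt.Println("CRD established; safe to apply csi-hostpath-snapshotclass.yaml")
    }
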
	I0116 02:28:50.791865  338598 main.go:141] libmachine: (addons-133977) Calling .Close
	I0116 02:28:50.791929  338598 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.239881541s)
	I0116 02:28:50.791949  338598 main.go:141] libmachine: Making call to close driver server
	I0116 02:28:50.791959  338598 main.go:141] libmachine: (addons-133977) Calling .Close
	I0116 02:28:50.792058  338598 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:28:50.792068  338598 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:28:50.792078  338598 main.go:141] libmachine: Making call to close driver server
	I0116 02:28:50.792086  338598 main.go:141] libmachine: (addons-133977) Calling .Close
	I0116 02:28:50.792156  338598 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:28:50.792145  338598 main.go:141] libmachine: (addons-133977) DBG | Closing plugin on server side
	I0116 02:28:50.792172  338598 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:28:50.792183  338598 addons.go:470] Verifying addon ingress=true in "addons-133977"
	I0116 02:28:50.792187  338598 main.go:141] libmachine: (addons-133977) DBG | Closing plugin on server side
	I0116 02:28:50.792214  338598 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:28:50.792222  338598 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:28:50.794068  338598 out.go:177] * Verifying ingress addon...
	I0116 02:28:50.792293  338598 main.go:141] libmachine: Making call to close driver server
	I0116 02:28:50.792323  338598 main.go:141] libmachine: (addons-133977) DBG | Closing plugin on server side
	I0116 02:28:50.792346  338598 main.go:141] libmachine: (addons-133977) DBG | Closing plugin on server side
	I0116 02:28:50.792350  338598 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:28:50.792372  338598 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:28:50.792747  338598 main.go:141] libmachine: (addons-133977) DBG | Closing plugin on server side
	I0116 02:28:50.792775  338598 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:28:50.794202  338598 main.go:141] libmachine: (addons-133977) DBG | Closing plugin on server side
	I0116 02:28:50.794193  338598 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:28:50.795284  338598 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:28:50.795295  338598 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:28:50.795307  338598 addons.go:470] Verifying addon metrics-server=true in "addons-133977"
	I0116 02:28:50.795314  338598 main.go:141] libmachine: Making call to close driver server
	I0116 02:28:50.795324  338598 main.go:141] libmachine: (addons-133977) Calling .Close
	I0116 02:28:50.795335  338598 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:28:50.795346  338598 main.go:141] libmachine: Making call to close driver server
	I0116 02:28:50.795356  338598 main.go:141] libmachine: (addons-133977) Calling .Close
	I0116 02:28:50.795387  338598 main.go:141] libmachine: (addons-133977) Calling .Close
	I0116 02:28:50.795411  338598 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:28:50.795423  338598 main.go:141] libmachine: Making call to close driver server
	I0116 02:28:50.795433  338598 main.go:141] libmachine: (addons-133977) Calling .Close
	I0116 02:28:50.796302  338598 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0116 02:28:50.797686  338598 main.go:141] libmachine: (addons-133977) DBG | Closing plugin on server side
	I0116 02:28:50.797700  338598 main.go:141] libmachine: (addons-133977) DBG | Closing plugin on server side
	I0116 02:28:50.797701  338598 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:28:50.797700  338598 main.go:141] libmachine: (addons-133977) DBG | Closing plugin on server side
	I0116 02:28:50.797718  338598 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:28:50.797726  338598 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:28:50.797736  338598 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:28:50.797739  338598 main.go:141] libmachine: (addons-133977) DBG | Closing plugin on server side
	I0116 02:28:50.797742  338598 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:28:50.797751  338598 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:28:50.797760  338598 addons.go:470] Verifying addon registry=true in "addons-133977"
	I0116 02:28:50.797772  338598 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:28:50.797785  338598 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:28:50.799433  338598 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-133977 service yakd-dashboard -n yakd-dashboard
	
	I0116 02:28:50.801233  338598 out.go:177] * Verifying registry addon...
	I0116 02:28:50.802213  338598 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0116 02:28:50.802651  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:28:50.803574  338598 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0116 02:28:50.815026  338598 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0116 02:28:50.815051  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
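
The kapi.go lines above poll two label selectors (app.kubernetes.io/name=ingress-nginx in ingress-nginx and kubernetes.io/minikube-addons=registry in kube-system) and report each pod's phase until everything is Running. An equivalent one-shot listing with client-go, offered as a sketch that reuses the kubeconfig path from this log rather than minikube's own kapi helper:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        pods, err := client.CoreV1().Pods("kube-system").List(context.Background(),
            metav1.ListOptions{LabelSelector: "kubernetes.io/minikube-addons=registry"})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            // Phase corresponds to the "current state: Pending" messages above.
            fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
        }
    }
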
	I0116 02:28:51.162581  338598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0116 02:28:51.302292  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:28:51.307804  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:28:51.865958  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:28:51.866530  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:28:52.147457  338598 pod_ready.go:102] pod "coredns-5dd5756b68-bbh2g" in "kube-system" namespace has status "Ready":"False"
	I0116 02:28:52.328303  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:28:52.328640  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:28:52.813327  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:28:52.824683  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:28:53.151039  338598 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (7.151376644s)
	I0116 02:28:53.152962  338598 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0116 02:28:53.154572  338598 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0116 02:28:53.155999  338598 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0116 02:28:53.156024  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0116 02:28:53.157248  338598 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.747355242s)
	I0116 02:28:53.157308  338598 main.go:141] libmachine: Making call to close driver server
	I0116 02:28:53.157325  338598 main.go:141] libmachine: (addons-133977) Calling .Close
	I0116 02:28:53.157616  338598 main.go:141] libmachine: (addons-133977) DBG | Closing plugin on server side
	I0116 02:28:53.157678  338598 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:28:53.157691  338598 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:28:53.157706  338598 main.go:141] libmachine: Making call to close driver server
	I0116 02:28:53.157726  338598 main.go:141] libmachine: (addons-133977) Calling .Close
	I0116 02:28:53.158046  338598 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:28:53.158108  338598 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:28:53.158125  338598 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-133977"
	I0116 02:28:53.160463  338598 out.go:177] * Verifying csi-hostpath-driver addon...
	I0116 02:28:53.162896  338598 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0116 02:28:53.179922  338598 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0116 02:28:53.179949  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:28:53.216742  338598 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0116 02:28:53.216786  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0116 02:28:53.301839  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:28:53.329591  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:28:53.425156  338598 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0116 02:28:53.425183  338598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0116 02:28:53.581767  338598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0116 02:28:53.669265  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:28:53.804574  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:28:53.808204  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:28:54.169764  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:28:54.238075  338598 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.075414048s)
	I0116 02:28:54.238152  338598 main.go:141] libmachine: Making call to close driver server
	I0116 02:28:54.238169  338598 main.go:141] libmachine: (addons-133977) Calling .Close
	I0116 02:28:54.238541  338598 main.go:141] libmachine: (addons-133977) DBG | Closing plugin on server side
	I0116 02:28:54.238595  338598 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:28:54.238609  338598 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:28:54.238624  338598 main.go:141] libmachine: Making call to close driver server
	I0116 02:28:54.238637  338598 main.go:141] libmachine: (addons-133977) Calling .Close
	I0116 02:28:54.238889  338598 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:28:54.238908  338598 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:28:54.238943  338598 main.go:141] libmachine: (addons-133977) DBG | Closing plugin on server side
	I0116 02:28:54.305373  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:28:54.313164  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:28:54.615699  338598 pod_ready.go:102] pod "coredns-5dd5756b68-bbh2g" in "kube-system" namespace has status "Ready":"False"
	I0116 02:28:54.669830  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:28:54.804025  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:28:54.808741  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:28:55.175752  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:28:55.318016  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:28:55.318270  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:28:55.580747  338598 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.998928167s)
	I0116 02:28:55.580817  338598 main.go:141] libmachine: Making call to close driver server
	I0116 02:28:55.580842  338598 main.go:141] libmachine: (addons-133977) Calling .Close
	I0116 02:28:55.581223  338598 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:28:55.581255  338598 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:28:55.581265  338598 main.go:141] libmachine: Making call to close driver server
	I0116 02:28:55.581271  338598 main.go:141] libmachine: (addons-133977) DBG | Closing plugin on server side
	I0116 02:28:55.581274  338598 main.go:141] libmachine: (addons-133977) Calling .Close
	I0116 02:28:55.581640  338598 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:28:55.581659  338598 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:28:55.581666  338598 main.go:141] libmachine: (addons-133977) DBG | Closing plugin on server side
	I0116 02:28:55.583071  338598 addons.go:470] Verifying addon gcp-auth=true in "addons-133977"
	I0116 02:28:55.585619  338598 out.go:177] * Verifying gcp-auth addon...
	I0116 02:28:55.587564  338598 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0116 02:28:55.595846  338598 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0116 02:28:55.595876  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:28:55.669391  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:28:55.802980  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:28:55.809828  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:28:56.095950  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:28:56.170489  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:28:56.301909  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:28:56.309210  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:28:56.592049  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:28:56.670019  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:28:56.802879  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:28:56.808796  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:28:57.092902  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:28:57.112115  338598 pod_ready.go:102] pod "coredns-5dd5756b68-bbh2g" in "kube-system" namespace has status "Ready":"False"
	I0116 02:28:57.169819  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:28:57.303530  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:28:57.308734  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:28:57.592639  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:28:57.669837  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:28:57.801739  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:28:57.808951  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:28:58.094354  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:28:58.169604  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:28:58.305711  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:28:58.314039  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:28:58.591524  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:28:58.670489  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:28:58.802132  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:28:58.809533  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:28:59.091992  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:28:59.112361  338598 pod_ready.go:102] pod "coredns-5dd5756b68-bbh2g" in "kube-system" namespace has status "Ready":"False"
	I0116 02:28:59.169421  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:28:59.301039  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:28:59.308795  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:28:59.592872  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:00.145630  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:00.145733  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:00.145974  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:00.158014  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:00.170600  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:00.301235  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:00.320481  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:00.592525  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:00.670656  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:00.802499  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:00.809718  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:01.092702  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:01.170684  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:01.301854  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:01.308628  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:01.592981  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:01.612464  338598 pod_ready.go:102] pod "coredns-5dd5756b68-bbh2g" in "kube-system" namespace has status "Ready":"False"
	I0116 02:29:01.669247  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:01.801532  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:01.808750  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:02.092899  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:02.173924  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:02.303065  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:02.309699  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:02.594175  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:02.669819  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:02.802365  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:02.809299  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:03.092129  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:03.168943  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:03.301589  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:03.308708  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:03.592132  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:03.672062  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:03.800788  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:03.808572  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:04.092051  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:04.111891  338598 pod_ready.go:102] pod "coredns-5dd5756b68-bbh2g" in "kube-system" namespace has status "Ready":"False"
	I0116 02:29:04.169469  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:04.302312  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:04.308615  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:04.592705  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:04.669054  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:04.801883  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:04.810356  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:05.094774  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:05.170968  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:05.301714  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:05.313090  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:05.713495  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:05.721884  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:05.805685  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:05.813688  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:06.093626  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:06.112018  338598 pod_ready.go:102] pod "coredns-5dd5756b68-bbh2g" in "kube-system" namespace has status "Ready":"False"
	I0116 02:29:06.170055  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:06.301309  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:06.308142  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:06.591714  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:06.671191  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:06.801137  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:06.809144  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:07.095700  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:07.168893  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:07.301352  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:07.308047  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:07.593745  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:07.669415  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:07.801534  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:07.808454  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:08.092474  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:08.113053  338598 pod_ready.go:102] pod "coredns-5dd5756b68-bbh2g" in "kube-system" namespace has status "Ready":"False"
	I0116 02:29:08.170224  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:08.301002  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:08.309079  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:08.591561  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:08.669578  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:08.800957  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:08.808793  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:09.092394  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:09.169923  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:09.302279  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:09.309235  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:09.594009  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:09.670773  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:09.801833  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:09.808877  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:10.093108  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:10.115279  338598 pod_ready.go:102] pod "coredns-5dd5756b68-bbh2g" in "kube-system" namespace has status "Ready":"False"
	I0116 02:29:10.171797  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:10.302886  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:10.309316  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:10.599277  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:10.679295  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:10.801311  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:10.810753  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:11.092578  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:11.169659  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:11.301865  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:11.309306  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:11.592349  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:11.669572  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:11.802183  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:11.809187  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:12.092017  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:12.170646  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:12.302621  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:12.309213  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:12.593318  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:12.614896  338598 pod_ready.go:102] pod "coredns-5dd5756b68-bbh2g" in "kube-system" namespace has status "Ready":"False"
	I0116 02:29:12.674684  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:12.802538  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:12.808493  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:13.091604  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:13.169351  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:13.302315  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:13.309741  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:13.646906  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:13.679599  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:13.804549  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:13.811875  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:14.093262  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:14.168542  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:14.312305  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:14.313064  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:14.598843  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:14.617140  338598 pod_ready.go:102] pod "coredns-5dd5756b68-bbh2g" in "kube-system" namespace has status "Ready":"False"
	I0116 02:29:14.683535  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:14.803091  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:14.811402  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:15.092510  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:15.174452  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:15.302172  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:15.309870  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:15.591304  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:15.670277  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:15.801216  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:15.809455  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:16.103938  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:16.169530  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:16.302373  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:16.311000  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:16.807806  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:16.808103  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:16.808269  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:16.811261  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:16.889999  338598 pod_ready.go:102] pod "coredns-5dd5756b68-bbh2g" in "kube-system" namespace has status "Ready":"False"
	I0116 02:29:17.091905  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:17.169590  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:17.302129  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:17.309177  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:17.592568  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:17.668906  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:17.802696  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:17.808747  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:18.091450  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:18.169768  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:18.337024  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:18.340933  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:18.591970  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:18.612687  338598 pod_ready.go:92] pod "coredns-5dd5756b68-bbh2g" in "kube-system" namespace has status "Ready":"True"
	I0116 02:29:18.612715  338598 pod_ready.go:81] duration metric: took 39.507528635s waiting for pod "coredns-5dd5756b68-bbh2g" in "kube-system" namespace to be "Ready" ...
	I0116 02:29:18.612725  338598 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-snm4f" in "kube-system" namespace to be "Ready" ...
	I0116 02:29:18.615176  338598 pod_ready.go:97] error getting pod "coredns-5dd5756b68-snm4f" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-snm4f" not found
	I0116 02:29:18.615209  338598 pod_ready.go:81] duration metric: took 2.47672ms waiting for pod "coredns-5dd5756b68-snm4f" in "kube-system" namespace to be "Ready" ...
	E0116 02:29:18.615225  338598 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-snm4f" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-snm4f" not found
	I0116 02:29:18.615234  338598 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-133977" in "kube-system" namespace to be "Ready" ...
	I0116 02:29:18.620786  338598 pod_ready.go:92] pod "etcd-addons-133977" in "kube-system" namespace has status "Ready":"True"
	I0116 02:29:18.620812  338598 pod_ready.go:81] duration metric: took 5.570751ms waiting for pod "etcd-addons-133977" in "kube-system" namespace to be "Ready" ...
	I0116 02:29:18.620822  338598 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-133977" in "kube-system" namespace to be "Ready" ...
	I0116 02:29:18.626380  338598 pod_ready.go:92] pod "kube-apiserver-addons-133977" in "kube-system" namespace has status "Ready":"True"
	I0116 02:29:18.626406  338598 pod_ready.go:81] duration metric: took 5.577793ms waiting for pod "kube-apiserver-addons-133977" in "kube-system" namespace to be "Ready" ...
	I0116 02:29:18.626416  338598 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-133977" in "kube-system" namespace to be "Ready" ...
	I0116 02:29:18.633258  338598 pod_ready.go:92] pod "kube-controller-manager-addons-133977" in "kube-system" namespace has status "Ready":"True"
	I0116 02:29:18.633282  338598 pod_ready.go:81] duration metric: took 6.850454ms waiting for pod "kube-controller-manager-addons-133977" in "kube-system" namespace to be "Ready" ...
	I0116 02:29:18.633292  338598 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4hmst" in "kube-system" namespace to be "Ready" ...
	I0116 02:29:18.668997  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:18.802419  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:18.808427  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:18.811446  338598 pod_ready.go:92] pod "kube-proxy-4hmst" in "kube-system" namespace has status "Ready":"True"
	I0116 02:29:18.811472  338598 pod_ready.go:81] duration metric: took 178.173472ms waiting for pod "kube-proxy-4hmst" in "kube-system" namespace to be "Ready" ...
	I0116 02:29:18.811482  338598 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-133977" in "kube-system" namespace to be "Ready" ...
	I0116 02:29:19.091978  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:19.172606  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:19.210444  338598 pod_ready.go:92] pod "kube-scheduler-addons-133977" in "kube-system" namespace has status "Ready":"True"
	I0116 02:29:19.210478  338598 pod_ready.go:81] duration metric: took 398.987929ms waiting for pod "kube-scheduler-addons-133977" in "kube-system" namespace to be "Ready" ...
	I0116 02:29:19.210490  338598 pod_ready.go:38] duration metric: took 40.123594779s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 02:29:19.210513  338598 api_server.go:52] waiting for apiserver process to appear ...
	I0116 02:29:19.210585  338598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 02:29:19.228635  338598 api_server.go:72] duration metric: took 40.641258946s to wait for apiserver process to appear ...
	I0116 02:29:19.228671  338598 api_server.go:88] waiting for apiserver healthz status ...
	I0116 02:29:19.228701  338598 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0116 02:29:19.236956  338598 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I0116 02:29:19.238684  338598 api_server.go:141] control plane version: v1.28.4
	I0116 02:29:19.238727  338598 api_server.go:131] duration metric: took 10.046715ms to wait for apiserver health ...
	I0116 02:29:19.238740  338598 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 02:29:19.302157  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:19.310415  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:19.418486  338598 system_pods.go:59] 18 kube-system pods found
	I0116 02:29:19.418524  338598 system_pods.go:61] "coredns-5dd5756b68-bbh2g" [6a774e05-1ca3-4c84-a07f-7d9989ff5a80] Running
	I0116 02:29:19.418535  338598 system_pods.go:61] "csi-hostpath-attacher-0" [f56532dc-85e1-4230-9fa7-3333e8d8bba9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0116 02:29:19.418545  338598 system_pods.go:61] "csi-hostpath-resizer-0" [32f4cd8a-2b54-4da1-8b27-60cf4cd189f3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0116 02:29:19.418557  338598 system_pods.go:61] "csi-hostpathplugin-kqnk6" [fdd45326-861d-44f0-9c94-1084273fc50e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0116 02:29:19.418566  338598 system_pods.go:61] "etcd-addons-133977" [f6ed1f19-6417-40fd-aa16-466891d34426] Running
	I0116 02:29:19.418574  338598 system_pods.go:61] "kube-apiserver-addons-133977" [4e8f3b37-856d-4d23-b3d3-5a537a667307] Running
	I0116 02:29:19.418585  338598 system_pods.go:61] "kube-controller-manager-addons-133977" [d063b142-8300-4f46-8770-d99b0c210cf9] Running
	I0116 02:29:19.418596  338598 system_pods.go:61] "kube-ingress-dns-minikube" [05752291-19a9-4b33-924d-6bf2ca4c3e5b] Running
	I0116 02:29:19.418606  338598 system_pods.go:61] "kube-proxy-4hmst" [724ed6f7-daa2-4eba-9c22-25870ca3e669] Running
	I0116 02:29:19.418616  338598 system_pods.go:61] "kube-scheduler-addons-133977" [0c86edaf-85bc-4b37-9ab4-d3eb25792448] Running
	I0116 02:29:19.418623  338598 system_pods.go:61] "metrics-server-7c66d45ddc-x4fzp" [43297a34-7c32-4122-9147-d2d3f776506e] Running
	I0116 02:29:19.418634  338598 system_pods.go:61] "nvidia-device-plugin-daemonset-bczv7" [19af4a51-522e-44b5-9d02-bb291a9d7704] Running
	I0116 02:29:19.418644  338598 system_pods.go:61] "registry-kbjdn" [b122adb8-2e2d-4dae-8d96-0dd52afdbf2c] Running
	I0116 02:29:19.418654  338598 system_pods.go:61] "registry-proxy-8pknn" [5a3c3adb-491a-48ad-8000-fa60b548f2ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0116 02:29:19.418669  338598 system_pods.go:61] "snapshot-controller-58dbcc7b99-kjz4k" [e88fe834-3d6f-4b41-a1d6-101dc4571ff7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0116 02:29:19.418684  338598 system_pods.go:61] "snapshot-controller-58dbcc7b99-rh7gt" [bd5eca51-f743-4c33-a329-c7a72969d5e2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0116 02:29:19.418696  338598 system_pods.go:61] "storage-provisioner" [d2a4af37-625c-4d2a-8754-47ff1ed97b98] Running
	I0116 02:29:19.418707  338598 system_pods.go:61] "tiller-deploy-7b677967b9-jl58n" [9d6d3cc6-9f52-4331-a5cc-860d379851ff] Running
	I0116 02:29:19.418716  338598 system_pods.go:74] duration metric: took 179.964166ms to wait for pod list to return data ...
	I0116 02:29:19.418731  338598 default_sa.go:34] waiting for default service account to be created ...
	I0116 02:29:19.592511  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:19.609716  338598 default_sa.go:45] found service account: "default"
	I0116 02:29:19.609754  338598 default_sa.go:55] duration metric: took 191.011716ms for default service account to be created ...
	I0116 02:29:19.609768  338598 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 02:29:19.669574  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:19.802177  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:19.817317  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:19.822570  338598 system_pods.go:86] 18 kube-system pods found
	I0116 02:29:19.857680  338598 system_pods.go:89] "coredns-5dd5756b68-bbh2g" [6a774e05-1ca3-4c84-a07f-7d9989ff5a80] Running
	I0116 02:29:19.857699  338598 system_pods.go:89] "csi-hostpath-attacher-0" [f56532dc-85e1-4230-9fa7-3333e8d8bba9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0116 02:29:19.857706  338598 system_pods.go:89] "csi-hostpath-resizer-0" [32f4cd8a-2b54-4da1-8b27-60cf4cd189f3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0116 02:29:19.857718  338598 system_pods.go:89] "csi-hostpathplugin-kqnk6" [fdd45326-861d-44f0-9c94-1084273fc50e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0116 02:29:19.857730  338598 system_pods.go:89] "etcd-addons-133977" [f6ed1f19-6417-40fd-aa16-466891d34426] Running
	I0116 02:29:19.857745  338598 system_pods.go:89] "kube-apiserver-addons-133977" [4e8f3b37-856d-4d23-b3d3-5a537a667307] Running
	I0116 02:29:19.857754  338598 system_pods.go:89] "kube-controller-manager-addons-133977" [d063b142-8300-4f46-8770-d99b0c210cf9] Running
	I0116 02:29:19.857764  338598 system_pods.go:89] "kube-ingress-dns-minikube" [05752291-19a9-4b33-924d-6bf2ca4c3e5b] Running
	I0116 02:29:19.857771  338598 system_pods.go:89] "kube-proxy-4hmst" [724ed6f7-daa2-4eba-9c22-25870ca3e669] Running
	I0116 02:29:19.857779  338598 system_pods.go:89] "kube-scheduler-addons-133977" [0c86edaf-85bc-4b37-9ab4-d3eb25792448] Running
	I0116 02:29:19.857793  338598 system_pods.go:89] "metrics-server-7c66d45ddc-x4fzp" [43297a34-7c32-4122-9147-d2d3f776506e] Running
	I0116 02:29:19.857823  338598 system_pods.go:89] "nvidia-device-plugin-daemonset-bczv7" [19af4a51-522e-44b5-9d02-bb291a9d7704] Running
	I0116 02:29:19.857835  338598 system_pods.go:89] "registry-kbjdn" [b122adb8-2e2d-4dae-8d96-0dd52afdbf2c] Running
	I0116 02:29:19.857851  338598 system_pods.go:89] "registry-proxy-8pknn" [5a3c3adb-491a-48ad-8000-fa60b548f2ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0116 02:29:19.857867  338598 system_pods.go:89] "snapshot-controller-58dbcc7b99-kjz4k" [e88fe834-3d6f-4b41-a1d6-101dc4571ff7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0116 02:29:19.857888  338598 system_pods.go:89] "snapshot-controller-58dbcc7b99-rh7gt" [bd5eca51-f743-4c33-a329-c7a72969d5e2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0116 02:29:19.857900  338598 system_pods.go:89] "storage-provisioner" [d2a4af37-625c-4d2a-8754-47ff1ed97b98] Running
	I0116 02:29:19.857914  338598 system_pods.go:89] "tiller-deploy-7b677967b9-jl58n" [9d6d3cc6-9f52-4331-a5cc-860d379851ff] Running
	I0116 02:29:19.857929  338598 system_pods.go:126] duration metric: took 248.151763ms to wait for k8s-apps to be running ...
	I0116 02:29:19.857947  338598 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 02:29:19.858068  338598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:29:19.877607  338598 system_svc.go:56] duration metric: took 19.644078ms WaitForService to wait for kubelet.
	I0116 02:29:19.877659  338598 kubeadm.go:581] duration metric: took 41.290285651s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 02:29:19.877699  338598 node_conditions.go:102] verifying NodePressure condition ...
	I0116 02:29:20.010656  338598 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 02:29:20.010696  338598 node_conditions.go:123] node cpu capacity is 2
	I0116 02:29:20.010711  338598 node_conditions.go:105] duration metric: took 133.006319ms to run NodePressure ...
	I0116 02:29:20.010727  338598 start.go:228] waiting for startup goroutines ...
	I0116 02:29:20.097828  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:20.169964  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:20.302532  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:20.310546  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:20.593335  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:20.669460  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:20.801479  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:20.809176  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:21.092849  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:21.173750  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:21.302904  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:21.308631  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:21.594379  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:21.670148  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:21.802416  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:21.809416  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:22.093708  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:22.170143  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:22.301807  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:22.308995  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:22.637977  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:22.669782  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:22.802902  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:22.808611  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:23.092334  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:23.170337  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:23.302175  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:23.308889  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:23.593243  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:23.670501  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:23.801064  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:23.808974  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:24.290395  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:24.297406  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:24.304636  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:24.312369  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:24.592743  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:24.669648  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:24.801649  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:24.809124  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:25.091420  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:25.169254  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:25.302071  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:25.309152  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:25.591184  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:25.670195  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:25.805078  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:25.811377  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:26.098899  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:26.170565  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:26.301801  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:26.308960  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:26.592008  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:26.670631  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:26.801673  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:26.808409  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:27.092064  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:27.169852  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:27.301481  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:27.308219  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:27.592812  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:27.669654  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:27.801767  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:27.809094  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:28.092734  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:28.168698  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:28.303229  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:28.310949  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:28.592014  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:28.671244  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:28.801539  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:28.808900  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:29.092499  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:29.169585  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:29.302838  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:29.311389  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:29.592446  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:29.669790  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:29.801725  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:29.810198  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:30.092218  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:30.170543  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:30.301340  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:30.308484  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:30.593196  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:30.669558  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:30.801575  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:30.809723  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:31.092753  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:31.169726  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:31.302569  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:31.310524  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:31.592835  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:31.670447  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:31.801027  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:31.809826  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:32.096292  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:32.170377  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:32.301390  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:32.312855  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:32.595257  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:32.682519  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:32.801040  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:32.812389  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:33.092901  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:33.169390  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:33.301855  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:33.309722  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:33.592822  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:33.669666  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:33.801882  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:33.809179  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:34.091919  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:34.170579  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:34.302217  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:34.347777  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:34.592716  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:34.669429  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:34.801364  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:34.808951  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:35.092116  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:35.170544  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:35.302240  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:35.309289  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:35.593357  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:35.670074  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:35.801356  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:35.815383  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:36.092098  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:36.170921  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:36.301749  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:36.309403  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:29:36.594878  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:36.744484  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:36.801622  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:36.808883  338598 kapi.go:107] duration metric: took 46.005309367s to wait for kubernetes.io/minikube-addons=registry ...
	I0116 02:29:37.092556  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:37.169281  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:37.301500  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:37.591594  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:37.669142  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:37.801610  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:38.092958  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:38.170679  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:38.301696  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:38.593894  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:38.680359  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:38.809655  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:39.094227  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:39.172611  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:39.305023  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:39.592663  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:39.674470  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:39.803428  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:40.112982  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:40.173596  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:40.301750  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:40.592717  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:40.669311  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:40.801583  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:41.092276  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:41.175725  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:41.302032  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:41.604862  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:41.670015  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:41.801921  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:42.092634  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:42.169496  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:42.302346  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:42.591665  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:42.670631  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:42.804176  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:43.093415  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:43.170061  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:43.301879  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:43.593158  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:43.669382  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:43.801432  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:44.091843  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:44.170269  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:44.301306  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:44.592389  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:44.669498  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:44.802656  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:45.094920  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:45.169707  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:45.302170  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:45.592474  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:45.668887  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:29:45.806646  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:46.092460  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:46.170131  338598 kapi.go:107] duration metric: took 53.007232608s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0116 02:29:46.300784  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:46.594330  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:46.801128  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:47.092144  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:47.303475  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:47.592858  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:47.801504  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:48.092956  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:48.302059  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:48.591724  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:48.801872  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:49.092874  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:49.302663  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:49.592808  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:49.801735  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:50.092172  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:50.303026  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:50.591991  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:50.802140  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:51.091906  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:51.301755  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:51.592748  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:51.801837  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:52.092963  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:52.302256  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:52.593347  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:52.802351  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:53.094167  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:53.302864  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:53.592691  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:53.802524  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:54.091664  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:54.301397  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:54.591950  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:54.802365  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:55.092408  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:55.302326  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:55.591750  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:55.801892  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:56.092857  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:56.302732  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:56.594219  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:56.803423  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:57.091487  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:57.300906  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:57.592846  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:57.802431  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:58.092328  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:58.300896  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:58.595337  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:58.802335  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:59.092127  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:59.302922  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:29:59.592627  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:29:59.801013  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:30:00.092358  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:30:00.302212  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:30:00.596907  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:30:00.801934  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:30:01.092360  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:30:01.307981  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:30:01.593837  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:30:01.801356  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:30:02.095968  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:30:02.358050  338598 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:30:02.592538  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:30:02.800995  338598 kapi.go:107] duration metric: took 1m12.00468866s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0116 02:30:03.092931  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:30:03.592780  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:30:04.092696  338598 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:30:04.592562  338598 kapi.go:107] duration metric: took 1m9.004993197s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0116 02:30:04.594335  338598 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-133977 cluster.
	I0116 02:30:04.595793  338598 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0116 02:30:04.597246  338598 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0116 02:30:04.598828  338598 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner, default-storageclass, nvidia-device-plugin, storage-provisioner-rancher, metrics-server, inspektor-gadget, helm-tiller, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0116 02:30:04.600280  338598 addons.go:505] enable addons completed in 1m26.519846238s: enabled=[cloud-spanner ingress-dns storage-provisioner default-storageclass nvidia-device-plugin storage-provisioner-rancher metrics-server inspektor-gadget helm-tiller yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0116 02:30:04.600328  338598 start.go:233] waiting for cluster config update ...
	I0116 02:30:04.600352  338598 start.go:242] writing updated cluster config ...
	I0116 02:30:04.600612  338598 ssh_runner.go:195] Run: rm -f paused
	I0116 02:30:04.658418  338598 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 02:30:04.660377  338598 out.go:177] * Done! kubectl is now configured to use "addons-133977" cluster and "default" namespace by default
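Note on the repeated "kapi.go:96] waiting for pod ..." lines above: they record minikube polling the API server for pods matching each addon's label selector until the pods leave Pending, at roughly half-second intervals, and then printing the "duration metric: took ..." line once the selector is satisfied. As a minimal illustrative sketch only (assuming a standard client-go clientset; this is not minikube's actual kapi.go implementation), such a wait loop could look like:

// Sketch: poll for pods matching a label selector until all are Running.
// Assumes client-go; selector/namespace/timeout mirror the values seen in this log.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					allRunning = false
					break
				}
			}
			if allRunning {
				return nil
			}
		}
		// Roughly matches the ~500ms cadence visible in the timestamps above.
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for pods matching %q in namespace %q", selector, ns)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitForPods(context.Background(), cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pods are Running")
}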
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD
	a5ed1b7581225       a416a98b71e22       Less than a second ago   Exited              helper-pod                               0                   9c9eac63e6865       helper-pod-create-pvc-bbaf009a-9d85-4a80-afb5-fe15879d7219
	4744173521ad7       98f6c3b32d565       2 seconds ago            Exited              helm-test                                0                   9a9fcd590e9b4       helm-test
	9ec7f01b2566a       beae173ccac6a       3 seconds ago            Exited              registry-test                            0                   1d8383c2ba103       registry-test
	3c8e6399f5704       a8758716bb6aa       4 seconds ago            Running             task-pv-container                        0                   7ec854adea8f5       task-pv-pod
	6fe5c2b5277ff       3cb09943f099d       14 seconds ago           Running             headlamp                                 0                   6cd6e4ed70d0f       headlamp-7ddfbb94ff-rgkqx
	e6b519d891ea9       6d2a98b274382       22 seconds ago           Running             gcp-auth                                 0                   b638206e7af3f       gcp-auth-d4c87556c-dnxlg
	a7ba01c2bad32       d378d53ef198d       24 seconds ago           Exited              gadget                                   2                   a16a1605c6ed7       gadget-lnl4p
	4de407fdf7cd8       311f90a3747fd       24 seconds ago           Running             controller                               0                   97d6dab1a9a81       ingress-nginx-controller-69cff4fd79-6znk6
	955484134a659       738351fd438f0       39 seconds ago           Running             csi-snapshotter                          0                   8c83181695a77       csi-hostpathplugin-kqnk6
	1d681d580c35f       931dbfd16f87c       41 seconds ago           Running             csi-provisioner                          0                   8c83181695a77       csi-hostpathplugin-kqnk6
	814f9daed3f31       e899260153aed       42 seconds ago           Running             liveness-probe                           0                   8c83181695a77       csi-hostpathplugin-kqnk6
	16403efb281aa       e255e073c508c       43 seconds ago           Running             hostpath                                 0                   8c83181695a77       csi-hostpathplugin-kqnk6
	558782553a812       88ef14a257f42       45 seconds ago           Running             node-driver-registrar                    0                   8c83181695a77       csi-hostpathplugin-kqnk6
	25caf5d922a09       1ebff0f9671bc       46 seconds ago           Exited              patch                                    1                   2752430b846b4       gcp-auth-certs-patch-qnl54
	4b3ebd8e75690       1ebff0f9671bc       47 seconds ago           Exited              create                                   0                   2536a59373d3e       gcp-auth-certs-create-w4hjk
	1da35367b0b49       1ebff0f9671bc       48 seconds ago           Exited              patch                                    0                   fba3eb0a0aff9       ingress-nginx-admission-patch-x5s8b
	041eecb4a37a1       1ebff0f9671bc       48 seconds ago           Exited              create                                   0                   14fc69b173032       ingress-nginx-admission-create-shmh6
	b82b85e386977       d2fd211e7dcaa       50 seconds ago           Running             registry-proxy                           0                   0e369820f2d29       registry-proxy-8pknn
	16f548ecef31c       aa61ee9c70bc4       53 seconds ago           Running             volume-snapshot-controller               0                   ec838891221e2       snapshot-controller-58dbcc7b99-rh7gt
	a89d81c7ab507       e16d1e3a10667       53 seconds ago           Running             local-path-provisioner                   0                   86c2d4770829f       local-path-provisioner-78b46b4d5c-cbvcl
	8a684c9600d9a       a1ed5895ba635       55 seconds ago           Running             csi-external-health-monitor-controller   0                   8c83181695a77       csi-hostpathplugin-kqnk6
	08720f95801be       aa61ee9c70bc4       57 seconds ago           Running             volume-snapshot-controller               0                   07e0ca28c19fa       snapshot-controller-58dbcc7b99-kjz4k
	2fcfba9cacb97       19a639eda60f0       58 seconds ago           Running             csi-resizer                              0                   01ffc69bd1155       csi-hostpath-resizer-0
	d73e1a3f9c726       59cbb42146a37       59 seconds ago           Running             csi-attacher                             0                   2f1c322359026       csi-hostpath-attacher-0
	bb7be12511565       31de47c733c91       About a minute ago       Running             yakd                                     0                   905293acf257d       yakd-dashboard-9947fc6bf-qz226
	6afbf073157f7       3f39089e90831       About a minute ago       Running             tiller                                   0                   6014cfefac6de       tiller-deploy-7b677967b9-jl58n
	700d89260f1f5       a608c686bac93       About a minute ago       Running             metrics-server                           0                   2b380e35cacbe       metrics-server-7c66d45ddc-x4fzp
	63f9d14fc3ad0       909c3ff012b7f       About a minute ago       Running             registry                                 0                   5ac0a6196024c       registry-kbjdn
	f62213dc4c9d0       1499ed4fbd0aa       About a minute ago       Running             minikube-ingress-dns                     0                   5fc9384ced5ba       kube-ingress-dns-minikube
	e336aeb71c83a       6e38f40d628db       About a minute ago       Running             storage-provisioner                      0                   07b33b3e88477       storage-provisioner
	90bda3e089068       ead0a4a53df89       About a minute ago       Running             coredns                                  0                   5b005e779abfb       coredns-5dd5756b68-bbh2g
	fad18290a2fb3       83f6cc407eed8       About a minute ago       Running             kube-proxy                               0                   c3264d0d33a8e       kube-proxy-4hmst
	4c6ed9edd39f9       73deb9a3f7025       2 minutes ago            Running             etcd                                     0                   e5b21b603726d       etcd-addons-133977
	e8ab421fe893e       e3db313c6dbc0       2 minutes ago            Running             kube-scheduler                           0                   e0ebea1903dd4       kube-scheduler-addons-133977
	5e2f6181a70a3       d058aa5ab969c       2 minutes ago            Running             kube-controller-manager                  0                   16dc3a0500a67       kube-controller-manager-addons-133977
	21d8aaa786930       7fe0e6f37db33       2 minutes ago            Running             kube-apiserver                           0                   9ea5b3d971105       kube-apiserver-addons-133977
	
	
	==> containerd <==
	-- Journal begins at Tue 2024-01-16 02:27:48 UTC, ends at Tue 2024-01-16 02:30:25 UTC. --
	Jan 16 02:30:24 addons-133977 containerd[687]: time="2024-01-16T02:30:24.632698203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:helper-pod-create-pvc-bbaf009a-9d85-4a80-afb5-fe15879d7219,Uid:100c3251-facf-4c3c-ab01-69ae24ed5739,Namespace:local-path-storage,Attempt:0,} returns sandbox id \"9c9eac63e6865c0abef24371211ad99c5151133463c2e19424f42918ca6b07dd\""
	Jan 16 02:30:24 addons-133977 containerd[687]: time="2024-01-16T02:30:24.636251701Z" level=info msg="PullImage \"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Jan 16 02:30:24 addons-133977 containerd[687]: time="2024-01-16T02:30:24.642169243Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Jan 16 02:30:24 addons-133977 containerd[687]: time="2024-01-16T02:30:24.715959738Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Jan 16 02:30:25 addons-133977 containerd[687]: time="2024-01-16T02:30:25.163404975Z" level=info msg="ImageCreate event name:\"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Jan 16 02:30:25 addons-133977 containerd[687]: time="2024-01-16T02:30:25.166089393Z" level=info msg="stop pulling image docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: active requests=0, bytes read=2233027"
	Jan 16 02:30:25 addons-133977 containerd[687]: time="2024-01-16T02:30:25.168496213Z" level=info msg="ImageCreate event name:\"sha256:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Jan 16 02:30:25 addons-133977 containerd[687]: time="2024-01-16T02:30:25.171194847Z" level=info msg="Finish port forwarding for \"6014cfefac6de6514377acf4a02e87b80df036c5cbbaa41cc240e9036eb363bc\" port 44134"
	Jan 16 02:30:25 addons-133977 containerd[687]: time="2024-01-16T02:30:25.173429190Z" level=info msg="ImageUpdate event name:\"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Jan 16 02:30:25 addons-133977 containerd[687]: time="2024-01-16T02:30:25.177236934Z" level=info msg="Pulled image \"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\" with image id \"sha256:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824\", repo tag \"\", repo digest \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\", size \"2224229\" in 540.768589ms"
	Jan 16 02:30:25 addons-133977 containerd[687]: time="2024-01-16T02:30:25.177309741Z" level=info msg="PullImage \"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\" returns image reference \"sha256:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824\""
	Jan 16 02:30:25 addons-133977 containerd[687]: time="2024-01-16T02:30:25.185407379Z" level=info msg="CreateContainer within sandbox \"9c9eac63e6865c0abef24371211ad99c5151133463c2e19424f42918ca6b07dd\" for container &ContainerMetadata{Name:helper-pod,Attempt:0,}"
	Jan 16 02:30:25 addons-133977 containerd[687]: time="2024-01-16T02:30:25.234029851Z" level=info msg="CreateContainer within sandbox \"9c9eac63e6865c0abef24371211ad99c5151133463c2e19424f42918ca6b07dd\" for &ContainerMetadata{Name:helper-pod,Attempt:0,} returns container id \"a5ed1b7581225090dd6e12ec16f125be2a10fc265eded8a065a3ab1fcecd2d79\""
	Jan 16 02:30:25 addons-133977 containerd[687]: time="2024-01-16T02:30:25.236062895Z" level=info msg="StartContainer for \"a5ed1b7581225090dd6e12ec16f125be2a10fc265eded8a065a3ab1fcecd2d79\""
	Jan 16 02:30:25 addons-133977 containerd[687]: time="2024-01-16T02:30:25.376281922Z" level=info msg="StopPodSandbox for \"9a9fcd590e9b414208447a0b27e7910c59dec77a099bf7ba1e6c74e1dd7abc9e\""
	Jan 16 02:30:25 addons-133977 containerd[687]: time="2024-01-16T02:30:25.376615365Z" level=info msg="Container to stop \"4744173521ad7687144254eea07ff8aff4223eee041b7cec0b7bda79fc00a1d0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jan 16 02:30:25 addons-133977 containerd[687]: time="2024-01-16T02:30:25.400698435Z" level=info msg="StartContainer for \"a5ed1b7581225090dd6e12ec16f125be2a10fc265eded8a065a3ab1fcecd2d79\" returns successfully"
	Jan 16 02:30:25 addons-133977 containerd[687]: time="2024-01-16T02:30:25.506103121Z" level=info msg="shim disconnected" id=a5ed1b7581225090dd6e12ec16f125be2a10fc265eded8a065a3ab1fcecd2d79 namespace=k8s.io
	Jan 16 02:30:25 addons-133977 containerd[687]: time="2024-01-16T02:30:25.506809865Z" level=warning msg="cleaning up after shim disconnected" id=a5ed1b7581225090dd6e12ec16f125be2a10fc265eded8a065a3ab1fcecd2d79 namespace=k8s.io
	Jan 16 02:30:25 addons-133977 containerd[687]: time="2024-01-16T02:30:25.506997546Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 16 02:30:25 addons-133977 containerd[687]: time="2024-01-16T02:30:25.506524073Z" level=info msg="shim disconnected" id=9a9fcd590e9b414208447a0b27e7910c59dec77a099bf7ba1e6c74e1dd7abc9e namespace=k8s.io
	Jan 16 02:30:25 addons-133977 containerd[687]: time="2024-01-16T02:30:25.508002542Z" level=warning msg="cleaning up after shim disconnected" id=9a9fcd590e9b414208447a0b27e7910c59dec77a099bf7ba1e6c74e1dd7abc9e namespace=k8s.io
	Jan 16 02:30:25 addons-133977 containerd[687]: time="2024-01-16T02:30:25.508041175Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 16 02:30:25 addons-133977 containerd[687]: time="2024-01-16T02:30:25.642732828Z" level=info msg="TearDown network for sandbox \"9a9fcd590e9b414208447a0b27e7910c59dec77a099bf7ba1e6c74e1dd7abc9e\" successfully"
	Jan 16 02:30:25 addons-133977 containerd[687]: time="2024-01-16T02:30:25.643220954Z" level=info msg="StopPodSandbox for \"9a9fcd590e9b414208447a0b27e7910c59dec77a099bf7ba1e6c74e1dd7abc9e\" returns successfully"
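Note on the two "failed to decode hosts.toml ... invalid `host` tree" errors above: containerd found a registry hosts.toml it could not parse, so it falls back to the default registry endpoint and the pulls still succeed. For reference only (the path and mirror URL below are placeholders, not values taken from this run), a minimal well-formed hosts.toml has this shape:

# /etc/containerd/certs.d/docker.io/hosts.toml  (example path, not from this run)
server = "https://registry-1.docker.io"

[host."https://mirror.example.com"]
  capabilities = ["pull", "resolve"]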
	
	
	==> coredns [90bda3e0890684c9575b910a7d6f1683c0d8d0e8a7b878080b41fd2e0fbbf838] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 10.244.0.22:37231 - 41038 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000392444s
	[INFO] 10.244.0.22:46445 - 3783 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000158598s
	[INFO] 10.244.0.22:36995 - 65350 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000187187s
	[INFO] 10.244.0.22:48458 - 20321 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000099048s
	[INFO] 10.244.0.22:55410 - 42112 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000096472s
	[INFO] 10.244.0.22:47965 - 62236 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000122684s
	[INFO] 10.244.0.22:53715 - 29059 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000795435s
	[INFO] 10.244.0.22:53479 - 40069 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 382 0.001307719s
	[INFO] 10.244.0.25:38511 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000492896s
	[INFO] 10.244.0.25:49822 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000160153s
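Note on the NXDOMAIN entries above: they are the expected effect of the pod DNS search path. With the Kubernetes default ndots setting, "storage.googleapis.com" is first tried with each cluster search suffix (pod namespace, svc.cluster.local, cluster.local) before the bare name resolves, which is why only the final two queries return NOERROR. A pod's /etc/resolv.conf typically looks like the sketch below (the nameserver address is an assumption, not taken from this run):

search gcp-auth.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
options ndots:5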
	
	
	==> describe nodes <==
	Name:               addons-133977
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-133977
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578
	                    minikube.k8s.io/name=addons-133977
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T02_28_25_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-133977
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-133977"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 02:28:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-133977
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 02:30:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 02:29:57 +0000   Tue, 16 Jan 2024 02:28:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 02:29:57 +0000   Tue, 16 Jan 2024 02:28:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 02:29:57 +0000   Tue, 16 Jan 2024 02:28:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 02:29:57 +0000   Tue, 16 Jan 2024 02:28:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.10
	  Hostname:    addons-133977
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 752ce1d0d3d0447aaa40c4fa097eca20
	  System UUID:                752ce1d0-d3d0-447a-aa40-c4fa097eca20
	  Boot ID:                    ac32c07a-7dca-4a9e-88b7-c956678d9a9b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.11
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (25 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  gadget                      gadget-lnl4p                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  gcp-auth                    gcp-auth-d4c87556c-dnxlg                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  headlamp                    headlamp-7ddfbb94ff-rgkqx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	  ingress-nginx               ingress-nginx-controller-69cff4fd79-6znk6                     100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         95s
	  kube-system                 coredns-5dd5756b68-bbh2g                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     108s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 csi-hostpathplugin-kqnk6                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 etcd-addons-133977                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         119s
	  kube-system                 kube-apiserver-addons-133977                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-addons-133977                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-proxy-4hmst                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-addons-133977                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 metrics-server-7c66d45ddc-x4fzp                               100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         99s
	  kube-system                 registry-kbjdn                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 registry-proxy-8pknn                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 snapshot-controller-58dbcc7b99-kjz4k                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 snapshot-controller-58dbcc7b99-rh7gt                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 tiller-deploy-7b677967b9-jl58n                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  local-path-storage          helper-pod-create-pvc-bbaf009a-9d85-4a80-afb5-fe15879d7219    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  local-path-storage          local-path-provisioner-78b46b4d5c-cbvcl                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-qz226                                0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     97s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             588Mi (15%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 106s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  2m9s (x8 over 2m9s)  kubelet          Node addons-133977 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m9s (x8 over 2m9s)  kubelet          Node addons-133977 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m9s (x7 over 2m9s)  kubelet          Node addons-133977 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m                   kubelet          Node addons-133977 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m                   kubelet          Node addons-133977 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m                   kubelet          Node addons-133977 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m                   kubelet          Node addons-133977 status is now: NodeReady
	  Normal  RegisteredNode           108s                 node-controller  Node addons-133977 event: Registered Node addons-133977 in Controller
	
	
	==> dmesg <==
	[  +0.092875] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.524459] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.793320] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.159820] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.042705] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jan16 02:28] systemd-fstab-generator[555]: Ignoring "noauto" for root device
	[  +0.121786] systemd-fstab-generator[566]: Ignoring "noauto" for root device
	[  +0.156366] systemd-fstab-generator[579]: Ignoring "noauto" for root device
	[  +0.104089] systemd-fstab-generator[590]: Ignoring "noauto" for root device
	[  +0.255924] systemd-fstab-generator[617]: Ignoring "noauto" for root device
	[  +6.451423] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +4.432874] systemd-fstab-generator[841]: Ignoring "noauto" for root device
	[  +9.775192] systemd-fstab-generator[1210]: Ignoring "noauto" for root device
	[ +18.981820] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.096878] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.138193] kauditd_printk_skb: 52 callbacks suppressed
	[Jan16 02:29] kauditd_printk_skb: 12 callbacks suppressed
	[ +32.722046] kauditd_printk_skb: 18 callbacks suppressed
	[ +16.265406] kauditd_printk_skb: 38 callbacks suppressed
	[Jan16 02:30] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.513063] kauditd_printk_skb: 29 callbacks suppressed
	[ +11.222224] kauditd_printk_skb: 7 callbacks suppressed
	
	
	==> etcd [4c6ed9edd39f989aa3af24fdca478f068e679f91ca161b1ed7769237931eff14] <==
	{"level":"warn","ts":"2024-01-16T02:29:16.796688Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.491141ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81782"}
	{"level":"info","ts":"2024-01-16T02:29:16.796716Z","caller":"traceutil/trace.go:171","msg":"trace[741184820] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:944; }","duration":"134.529367ms","start":"2024-01-16T02:29:16.66218Z","end":"2024-01-16T02:29:16.796709Z","steps":["trace[741184820] 'agreement among raft nodes before linearized reading'  (duration: 134.387113ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T02:29:16.796887Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.20051ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-bbh2g\" ","response":"range_response_count:1 size:4741"}
	{"level":"info","ts":"2024-01-16T02:29:16.796907Z","caller":"traceutil/trace.go:171","msg":"trace[1730594859] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-bbh2g; range_end:; response_count:1; response_revision:944; }","duration":"192.225061ms","start":"2024-01-16T02:29:16.604677Z","end":"2024-01-16T02:29:16.796902Z","steps":["trace[1730594859] 'agreement among raft nodes before linearized reading'  (duration: 192.177873ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T02:29:16.797002Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.837785ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10572"}
	{"level":"info","ts":"2024-01-16T02:29:16.797015Z","caller":"traceutil/trace.go:171","msg":"trace[1556778504] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:944; }","duration":"206.852511ms","start":"2024-01-16T02:29:16.590159Z","end":"2024-01-16T02:29:16.797012Z","steps":["trace[1556778504] 'agreement among raft nodes before linearized reading'  (duration: 206.811604ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T02:29:24.273892Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T02:29:23.924006Z","time spent":"349.877041ms","remote":"127.0.0.1:36882","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2024-01-16T02:29:24.27944Z","caller":"traceutil/trace.go:171","msg":"trace[2038990584] linearizableReadLoop","detail":"{readStateIndex:996; appliedIndex:995; }","duration":"193.364127ms","start":"2024-01-16T02:29:24.086054Z","end":"2024-01-16T02:29:24.279418Z","steps":["trace[2038990584] 'read index received'  (duration: 188.480127ms)","trace[2038990584] 'applied index is now lower than readState.Index'  (duration: 4.883211ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-16T02:29:24.279741Z","caller":"traceutil/trace.go:171","msg":"trace[1457157356] transaction","detail":"{read_only:false; response_revision:969; number_of_response:1; }","duration":"308.858999ms","start":"2024-01-16T02:29:23.970867Z","end":"2024-01-16T02:29:24.279726Z","steps":["trace[1457157356] 'process raft request'  (duration: 308.360709ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T02:29:24.281611Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T02:29:23.97085Z","time spent":"310.651293ms","remote":"127.0.0.1:36934","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":678,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-cfpv44ajpouj4sywmpmrrt5oxq\" mod_revision:931 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-cfpv44ajpouj4sywmpmrrt5oxq\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-cfpv44ajpouj4sywmpmrrt5oxq\" > >"}
	{"level":"warn","ts":"2024-01-16T02:29:24.280983Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.124726ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81674"}
	{"level":"info","ts":"2024-01-16T02:29:24.281883Z","caller":"traceutil/trace.go:171","msg":"trace[350456923] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:970; }","duration":"119.080845ms","start":"2024-01-16T02:29:24.162784Z","end":"2024-01-16T02:29:24.281865Z","steps":["trace[350456923] 'agreement among raft nodes before linearized reading'  (duration: 117.929365ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T02:29:24.281448Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"195.401038ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10572"}
	{"level":"info","ts":"2024-01-16T02:29:24.282418Z","caller":"traceutil/trace.go:171","msg":"trace[339266656] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:970; }","duration":"196.377325ms","start":"2024-01-16T02:29:24.086031Z","end":"2024-01-16T02:29:24.282408Z","steps":["trace[339266656] 'agreement among raft nodes before linearized reading'  (duration: 195.27316ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T02:29:36.730301Z","caller":"traceutil/trace.go:171","msg":"trace[242591842] transaction","detail":"{read_only:false; response_revision:1028; number_of_response:1; }","duration":"119.173088ms","start":"2024-01-16T02:29:36.611093Z","end":"2024-01-16T02:29:36.730266Z","steps":["trace[242591842] 'process raft request'  (duration: 118.960097ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T02:29:36.736967Z","caller":"traceutil/trace.go:171","msg":"trace[1340528457] transaction","detail":"{read_only:false; response_revision:1029; number_of_response:1; }","duration":"125.455981ms","start":"2024-01-16T02:29:36.611497Z","end":"2024-01-16T02:29:36.736953Z","steps":["trace[1340528457] 'process raft request'  (duration: 125.06366ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T02:30:02.341616Z","caller":"traceutil/trace.go:171","msg":"trace[967031885] transaction","detail":"{read_only:false; response_revision:1170; number_of_response:1; }","duration":"229.593639ms","start":"2024-01-16T02:30:02.11201Z","end":"2024-01-16T02:30:02.341604Z","steps":["trace[967031885] 'process raft request'  (duration: 228.885777ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T02:30:10.833301Z","caller":"traceutil/trace.go:171","msg":"trace[2039484942] linearizableReadLoop","detail":"{readStateIndex:1276; appliedIndex:1275; }","duration":"160.244415ms","start":"2024-01-16T02:30:10.67303Z","end":"2024-01-16T02:30:10.833275Z","steps":["trace[2039484942] 'read index received'  (duration: 159.685464ms)","trace[2039484942] 'applied index is now lower than readState.Index'  (duration: 558.284µs)"],"step_count":2}
	{"level":"info","ts":"2024-01-16T02:30:10.833975Z","caller":"traceutil/trace.go:171","msg":"trace[1267768218] transaction","detail":"{read_only:false; response_revision:1240; number_of_response:1; }","duration":"199.720058ms","start":"2024-01-16T02:30:10.634104Z","end":"2024-01-16T02:30:10.833824Z","steps":["trace[1267768218] 'process raft request'  (duration: 198.792212ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T02:30:10.835693Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.061055ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82423"}
	{"level":"info","ts":"2024-01-16T02:30:10.836447Z","caller":"traceutil/trace.go:171","msg":"trace[942697260] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1240; }","duration":"162.812265ms","start":"2024-01-16T02:30:10.673615Z","end":"2024-01-16T02:30:10.836427Z","steps":["trace[942697260] 'agreement among raft nodes before linearized reading'  (duration: 161.937856ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T02:30:10.834695Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.424564ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/yakd-dashboard/\" range_end:\"/registry/pods/yakd-dashboard0\" ","response":"range_response_count:1 size:4124"}
	{"level":"info","ts":"2024-01-16T02:30:10.837137Z","caller":"traceutil/trace.go:171","msg":"trace[1459492816] range","detail":"{range_begin:/registry/pods/yakd-dashboard/; range_end:/registry/pods/yakd-dashboard0; response_count:1; response_revision:1240; }","duration":"164.111546ms","start":"2024-01-16T02:30:10.673005Z","end":"2024-01-16T02:30:10.837117Z","steps":["trace[1459492816] 'agreement among raft nodes before linearized reading'  (duration: 161.254038ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T02:30:10.83627Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.024964ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82423"}
	{"level":"info","ts":"2024-01-16T02:30:10.837844Z","caller":"traceutil/trace.go:171","msg":"trace[1271873203] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1240; }","duration":"146.603039ms","start":"2024-01-16T02:30:10.691229Z","end":"2024-01-16T02:30:10.837832Z","steps":["trace[1271873203] 'agreement among raft nodes before linearized reading'  (duration: 144.735621ms)"],"step_count":1}
	
	
	==> gcp-auth [e6b519d891ea90c27df2482ccc5768800ff77072ccf917e0fe4ddeed89b52bc3] <==
	2024/01/16 02:30:03 GCP Auth Webhook started!
	2024/01/16 02:30:06 Ready to marshal response ...
	2024/01/16 02:30:06 Ready to write response ...
	2024/01/16 02:30:06 Ready to marshal response ...
	2024/01/16 02:30:06 Ready to write response ...
	2024/01/16 02:30:06 http: TLS handshake error from 10.244.0.1:25674: EOF
	2024/01/16 02:30:06 Ready to marshal response ...
	2024/01/16 02:30:06 Ready to write response ...
	2024/01/16 02:30:15 Ready to marshal response ...
	2024/01/16 02:30:15 Ready to write response ...
	2024/01/16 02:30:17 Ready to marshal response ...
	2024/01/16 02:30:17 Ready to write response ...
	2024/01/16 02:30:17 Ready to marshal response ...
	2024/01/16 02:30:17 Ready to write response ...
	2024/01/16 02:30:23 Ready to marshal response ...
	2024/01/16 02:30:23 Ready to write response ...
	2024/01/16 02:30:23 Ready to marshal response ...
	2024/01/16 02:30:23 Ready to write response ...
	
	
	==> kernel <==
	 02:30:26 up 2 min,  0 users,  load average: 3.78, 1.97, 0.77
	Linux addons-133977 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [21d8aaa786930793df0d28382f1ef22b448769b5d8e93632d9a5f2c2b849ea03] <==
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 02:28:48.480504       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 02:28:48.766796       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0116 02:28:50.384862       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller" clusterIPs={"IPv4":"10.96.62.93"}
	I0116 02:28:50.423258       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-admission" clusterIPs={"IPv4":"10.104.222.94"}
	I0116 02:28:50.523524       1 controller.go:624] quota admission added evaluator for: jobs.batch
	W0116 02:28:51.921143       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 02:28:52.758682       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.109.17.189"}
	I0116 02:28:52.792177       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I0116 02:28:53.001898       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.108.34.105"}
	W0116 02:28:54.142828       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 02:28:55.315389       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.106.246.116"}
	W0116 02:29:14.483793       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 02:29:14.483881       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 02:29:14.484155       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0116 02:29:14.484521       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.227.8:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.227.8:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.227.8:443: connect: connection refused
	E0116 02:29:14.486167       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.227.8:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.227.8:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.227.8:443: connect: connection refused
	E0116 02:29:14.492079       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.227.8:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.227.8:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.227.8:443: connect: connection refused
	I0116 02:29:14.611909       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0116 02:29:21.993184       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0116 02:30:06.129434       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.143.68"}
	I0116 02:30:22.000283       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [5e2f6181a70a331a0e57a069e148fe5145e59a44f23b47e3b5528f786f835507] <==
	I0116 02:30:04.294788       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="195.731µs"
	I0116 02:30:06.180654       1 event.go:307] "Event occurred" object="headlamp/headlamp" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set headlamp-7ddfbb94ff to 1"
	I0116 02:30:06.194773       1 event.go:307] "Event occurred" object="headlamp/headlamp-7ddfbb94ff" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"headlamp-7ddfbb94ff-\" is forbidden: error looking up service account headlamp/headlamp: serviceaccount \"headlamp\" not found"
	I0116 02:30:06.210726       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-7ddfbb94ff" duration="30.883551ms"
	E0116 02:30:06.210780       1 replica_set.go:557] sync "headlamp/headlamp-7ddfbb94ff" failed with pods "headlamp-7ddfbb94ff-" is forbidden: error looking up service account headlamp/headlamp: serviceaccount "headlamp" not found
	I0116 02:30:06.251650       1 event.go:307] "Event occurred" object="headlamp/headlamp-7ddfbb94ff" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: headlamp-7ddfbb94ff-rgkqx"
	I0116 02:30:06.260871       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-7ddfbb94ff" duration="50.041568ms"
	I0116 02:30:06.291914       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-7ddfbb94ff" duration="30.949803ms"
	I0116 02:30:06.310087       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-7ddfbb94ff" duration="18.078461ms"
	I0116 02:30:06.310190       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-7ddfbb94ff" duration="51.54µs"
	I0116 02:30:11.034633       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0116 02:30:11.044482       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0116 02:30:11.076824       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0116 02:30:11.076887       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0116 02:30:11.136175       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0116 02:30:11.140153       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0116 02:30:11.231015       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-7ddfbb94ff" duration="46.958µs"
	I0116 02:30:12.269796       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-7ddfbb94ff" duration="11.71962ms"
	I0116 02:30:12.270989       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-7ddfbb94ff" duration="98.856µs"
	I0116 02:30:14.272631       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0116 02:30:14.765002       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="21.105609ms"
	I0116 02:30:14.766872       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="61.363µs"
	I0116 02:30:22.911419       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/cloud-spanner-emulator-64c8c85f65" duration="89.426µs"
	I0116 02:30:23.149883       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0116 02:30:23.299145       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	
	
	==> kube-proxy [fad18290a2fb3ffdc2ec8cd0cf2aede88aecc8cf1f8f4bd2d6887ab5a31b9faf] <==
	I0116 02:28:39.522221       1 server_others.go:69] "Using iptables proxy"
	I0116 02:28:39.548519       1 node.go:141] Successfully retrieved node IP: 192.168.39.10
	I0116 02:28:39.793188       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0116 02:28:39.799255       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0116 02:28:39.855412       1 server_others.go:152] "Using iptables Proxier"
	I0116 02:28:39.855458       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 02:28:39.855665       1 server.go:846] "Version info" version="v1.28.4"
	I0116 02:28:39.855675       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 02:28:39.869277       1 config.go:188] "Starting service config controller"
	I0116 02:28:39.869291       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 02:28:39.869305       1 config.go:97] "Starting endpoint slice config controller"
	I0116 02:28:39.869309       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 02:28:39.885805       1 config.go:315] "Starting node config controller"
	I0116 02:28:39.885818       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 02:28:39.974025       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0116 02:28:39.974082       1 shared_informer.go:318] Caches are synced for service config
	I0116 02:28:39.986019       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [e8ab421fe893e0386476390c2fab37b7caf092084f63f9637a8174acb9b3aac4] <==
	W0116 02:28:22.973834       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 02:28:22.973893       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0116 02:28:23.027053       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 02:28:23.027132       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0116 02:28:23.032592       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0116 02:28:23.032644       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0116 02:28:23.077888       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 02:28:23.078265       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0116 02:28:23.104601       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 02:28:23.104726       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0116 02:28:23.172878       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 02:28:23.172932       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0116 02:28:23.188441       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0116 02:28:23.188741       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0116 02:28:23.255489       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 02:28:23.255515       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0116 02:28:23.259862       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0116 02:28:23.260018       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0116 02:28:23.284237       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 02:28:23.284470       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0116 02:28:23.395728       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 02:28:23.395869       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0116 02:28:23.495468       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 02:28:23.495614       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0116 02:28:25.028222       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 02:27:48 UTC, ends at Tue 2024-01-16 02:30:26 UTC. --
	Jan 16 02:30:23 addons-133977 kubelet[1217]: I0116 02:30:23.441222    1217 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/100c3251-facf-4c3c-ab01-69ae24ed5739-gcp-creds\") pod \"helper-pod-create-pvc-bbaf009a-9d85-4a80-afb5-fe15879d7219\" (UID: \"100c3251-facf-4c3c-ab01-69ae24ed5739\") " pod="local-path-storage/helper-pod-create-pvc-bbaf009a-9d85-4a80-afb5-fe15879d7219"
	Jan 16 02:30:23 addons-133977 kubelet[1217]: I0116 02:30:23.642391    1217 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wbgt\" (UniqueName: \"kubernetes.io/projected/2e8bfd68-0cfb-4bb3-b192-97355ab7aa52-kube-api-access-9wbgt\") pod \"2e8bfd68-0cfb-4bb3-b192-97355ab7aa52\" (UID: \"2e8bfd68-0cfb-4bb3-b192-97355ab7aa52\") "
	Jan 16 02:30:23 addons-133977 kubelet[1217]: I0116 02:30:23.653509    1217 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e8bfd68-0cfb-4bb3-b192-97355ab7aa52-kube-api-access-9wbgt" (OuterVolumeSpecName: "kube-api-access-9wbgt") pod "2e8bfd68-0cfb-4bb3-b192-97355ab7aa52" (UID: "2e8bfd68-0cfb-4bb3-b192-97355ab7aa52"). InnerVolumeSpecName "kube-api-access-9wbgt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 16 02:30:23 addons-133977 kubelet[1217]: I0116 02:30:23.743046    1217 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/59386d89-88bd-45a8-a10b-dac8607fd63d-gcp-creds\") pod \"59386d89-88bd-45a8-a10b-dac8607fd63d\" (UID: \"59386d89-88bd-45a8-a10b-dac8607fd63d\") "
	Jan 16 02:30:23 addons-133977 kubelet[1217]: I0116 02:30:23.743367    1217 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8nlw\" (UniqueName: \"kubernetes.io/projected/59386d89-88bd-45a8-a10b-dac8607fd63d-kube-api-access-k8nlw\") pod \"59386d89-88bd-45a8-a10b-dac8607fd63d\" (UID: \"59386d89-88bd-45a8-a10b-dac8607fd63d\") "
	Jan 16 02:30:23 addons-133977 kubelet[1217]: I0116 02:30:23.743624    1217 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9wbgt\" (UniqueName: \"kubernetes.io/projected/2e8bfd68-0cfb-4bb3-b192-97355ab7aa52-kube-api-access-9wbgt\") on node \"addons-133977\" DevicePath \"\""
	Jan 16 02:30:23 addons-133977 kubelet[1217]: I0116 02:30:23.744428    1217 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59386d89-88bd-45a8-a10b-dac8607fd63d-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "59386d89-88bd-45a8-a10b-dac8607fd63d" (UID: "59386d89-88bd-45a8-a10b-dac8607fd63d"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jan 16 02:30:23 addons-133977 kubelet[1217]: I0116 02:30:23.755126    1217 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59386d89-88bd-45a8-a10b-dac8607fd63d-kube-api-access-k8nlw" (OuterVolumeSpecName: "kube-api-access-k8nlw") pod "59386d89-88bd-45a8-a10b-dac8607fd63d" (UID: "59386d89-88bd-45a8-a10b-dac8607fd63d"). InnerVolumeSpecName "kube-api-access-k8nlw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 16 02:30:23 addons-133977 kubelet[1217]: I0116 02:30:23.844029    1217 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/59386d89-88bd-45a8-a10b-dac8607fd63d-gcp-creds\") on node \"addons-133977\" DevicePath \"\""
	Jan 16 02:30:23 addons-133977 kubelet[1217]: I0116 02:30:23.844077    1217 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-k8nlw\" (UniqueName: \"kubernetes.io/projected/59386d89-88bd-45a8-a10b-dac8607fd63d-kube-api-access-k8nlw\") on node \"addons-133977\" DevicePath \"\""
	Jan 16 02:30:24 addons-133977 kubelet[1217]: I0116 02:30:24.320480    1217 scope.go:117] "RemoveContainer" containerID="281fe56d7388726e5105b16fe5efc1c88246ef622ee1660fc46e93211acb225a"
	Jan 16 02:30:24 addons-133977 kubelet[1217]: I0116 02:30:24.355515    1217 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d8383c2ba103c4cce48cf445cc8c8741e6407ec90f558a097e5856b18e9f854"
	Jan 16 02:30:25 addons-133977 kubelet[1217]: I0116 02:30:25.714009    1217 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2e8bfd68-0cfb-4bb3-b192-97355ab7aa52" path="/var/lib/kubelet/pods/2e8bfd68-0cfb-4bb3-b192-97355ab7aa52/volumes"
	Jan 16 02:30:25 addons-133977 kubelet[1217]: I0116 02:30:25.715016    1217 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="59386d89-88bd-45a8-a10b-dac8607fd63d" path="/var/lib/kubelet/pods/59386d89-88bd-45a8-a10b-dac8607fd63d/volumes"
	Jan 16 02:30:25 addons-133977 kubelet[1217]: E0116 02:30:25.758050    1217 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 02:30:25 addons-133977 kubelet[1217]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 02:30:25 addons-133977 kubelet[1217]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 02:30:25 addons-133977 kubelet[1217]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 02:30:25 addons-133977 kubelet[1217]: I0116 02:30:25.763514    1217 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vmkr\" (UniqueName: \"kubernetes.io/projected/8691d4ba-1688-4b65-80a5-f320e460b0dd-kube-api-access-2vmkr\") pod \"8691d4ba-1688-4b65-80a5-f320e460b0dd\" (UID: \"8691d4ba-1688-4b65-80a5-f320e460b0dd\") "
	Jan 16 02:30:25 addons-133977 kubelet[1217]: I0116 02:30:25.775491    1217 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8691d4ba-1688-4b65-80a5-f320e460b0dd-kube-api-access-2vmkr" (OuterVolumeSpecName: "kube-api-access-2vmkr") pod "8691d4ba-1688-4b65-80a5-f320e460b0dd" (UID: "8691d4ba-1688-4b65-80a5-f320e460b0dd"). InnerVolumeSpecName "kube-api-access-2vmkr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 16 02:30:25 addons-133977 kubelet[1217]: I0116 02:30:25.864724    1217 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2vmkr\" (UniqueName: \"kubernetes.io/projected/8691d4ba-1688-4b65-80a5-f320e460b0dd-kube-api-access-2vmkr\") on node \"addons-133977\" DevicePath \"\""
	Jan 16 02:30:26 addons-133977 kubelet[1217]: I0116 02:30:26.068881    1217 scope.go:117] "RemoveContainer" containerID="4744173521ad7687144254eea07ff8aff4223eee041b7cec0b7bda79fc00a1d0"
	Jan 16 02:30:26 addons-133977 kubelet[1217]: I0116 02:30:26.079091    1217 scope.go:117] "RemoveContainer" containerID="25caf5d922a092026d33ddd7936522048674408f24fca7aa15ea38166294ff02"
	Jan 16 02:30:26 addons-133977 kubelet[1217]: I0116 02:30:26.089514    1217 scope.go:117] "RemoveContainer" containerID="4b3ebd8e75690b8c7771ca04a8a1bfc11de3fec9a44a2830e01d6fe6e0a1e67c"
	Jan 16 02:30:26 addons-133977 kubelet[1217]: I0116 02:30:26.100073    1217 scope.go:117] "RemoveContainer" containerID="9ec7f01b2566a708d8df1ab83bb0b08f80b8d4a23ae3787ab05f83bb55141abd"
	
	
	==> storage-provisioner [e336aeb71c83abd766a50ebfa30998dbecc6bf4f13579c9d2af4e33f822ee1d9] <==
	I0116 02:28:51.408991       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 02:28:51.444170       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 02:28:51.444245       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 02:28:51.474727       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 02:28:51.476776       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-133977_696fe1fb-a47b-4477-b57d-008c82e09df7!
	I0116 02:28:51.497829       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"91642070-5ce3-4e0e-b4a4-82d69d6e6b42", APIVersion:"v1", ResourceVersion:"742", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-133977_696fe1fb-a47b-4477-b57d-008c82e09df7 became leader
	I0116 02:28:51.577369       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-133977_696fe1fb-a47b-4477-b57d-008c82e09df7!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-133977 -n addons-133977
helpers_test.go:261: (dbg) Run:  kubectl --context addons-133977 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: test-local-path ingress-nginx-admission-create-shmh6 ingress-nginx-admission-patch-x5s8b tiller-deploy-7b677967b9-jl58n helper-pod-create-pvc-bbaf009a-9d85-4a80-afb5-fe15879d7219
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-133977 describe pod test-local-path ingress-nginx-admission-create-shmh6 ingress-nginx-admission-patch-x5s8b tiller-deploy-7b677967b9-jl58n helper-pod-create-pvc-bbaf009a-9d85-4a80-afb5-fe15879d7219
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-133977 describe pod test-local-path ingress-nginx-admission-create-shmh6 ingress-nginx-admission-patch-x5s8b tiller-deploy-7b677967b9-jl58n helper-pod-create-pvc-bbaf009a-9d85-4a80-afb5-fe15879d7219: exit status 1 (113.284028ms)

-- stdout --
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zsgwl (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-zsgwl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:            <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-shmh6" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-x5s8b" not found
	Error from server (NotFound): pods "tiller-deploy-7b677967b9-jl58n" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-bbaf009a-9d85-4a80-afb5-fe15879d7219" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-133977 describe pod test-local-path ingress-nginx-admission-create-shmh6 ingress-nginx-admission-patch-x5s8b tiller-deploy-7b677967b9-jl58n helper-pod-create-pvc-bbaf009a-9d85-4a80-afb5-fe15879d7219: exit status 1
--- FAIL: TestAddons/parallel/Registry (22.93s)
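The describe step above exits 1 only because several of the listed pods (the ingress-nginx admission jobs, tiller-deploy, and the local-path helper pod) were removed between the non-running-pod listing and the describe call, hence the NotFound errors in stderr. A minimal manual post-mortem sketch (bash + kubectl), assuming the addons-133977 profile is still running, that describes only the non-Running pods which still exist:

  # List pods that are not Running and describe each one that still exists,
  # so already-deleted helper pods do not turn the whole step into a failure.
  for ns_pod in $(kubectl --context addons-133977 get pods -A \
      --field-selector=status.phase!=Running \
      -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}{"\n"}{end}'); do
    kubectl --context addons-133977 -n "${ns_pod%%/*}" describe pod "${ns_pod##*/}" || true
  done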


Test pass (278/318)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 10.48
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
9 TestDownloadOnly/v1.16.0/DeleteAll 0.16
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.28.4/json-events 4.89
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.15
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.29.0-rc.2/json-events 7.46
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.15
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.15
30 TestBinaryMirror 0.61
31 TestOffline 120.14
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 149.9
39 TestAddons/parallel/Ingress 21.86
40 TestAddons/parallel/InspektorGadget 12.19
41 TestAddons/parallel/MetricsServer 7.12
42 TestAddons/parallel/HelmTiller 15.16
44 TestAddons/parallel/CSI 37.55
45 TestAddons/parallel/Headlamp 12.58
46 TestAddons/parallel/CloudSpanner 5.78
47 TestAddons/parallel/LocalPath 55.45
48 TestAddons/parallel/NvidiaDevicePlugin 6.94
49 TestAddons/parallel/Yakd 6.18
52 TestAddons/serial/GCPAuth/Namespaces 0.13
53 TestAddons/StoppedEnableDisable 92.53
54 TestCertOptions 65.06
55 TestCertExpiration 290.26
57 TestForceSystemdFlag 109.25
58 TestForceSystemdEnv 51.14
60 TestKVMDriverInstallOrUpdate 1.38
64 TestErrorSpam/setup 47.91
65 TestErrorSpam/start 0.41
66 TestErrorSpam/status 0.82
67 TestErrorSpam/pause 1.64
68 TestErrorSpam/unpause 1.76
69 TestErrorSpam/stop 2.29
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 61.83
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 6.47
76 TestFunctional/serial/KubeContext 0.05
77 TestFunctional/serial/KubectlGetPods 0.08
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.83
81 TestFunctional/serial/CacheCmd/cache/add_local 1.38
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.07
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2.08
86 TestFunctional/serial/CacheCmd/cache/delete 0.14
87 TestFunctional/serial/MinikubeKubectlCmd 0.13
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
89 TestFunctional/serial/ExtraConfig 43.8
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.57
92 TestFunctional/serial/LogsFileCmd 1.57
93 TestFunctional/serial/InvalidService 4.41
95 TestFunctional/parallel/ConfigCmd 0.48
96 TestFunctional/parallel/DashboardCmd 13.26
97 TestFunctional/parallel/DryRun 0.32
98 TestFunctional/parallel/InternationalLanguage 0.16
99 TestFunctional/parallel/StatusCmd 0.84
103 TestFunctional/parallel/ServiceCmdConnect 9.75
104 TestFunctional/parallel/AddonsCmd 0.16
105 TestFunctional/parallel/PersistentVolumeClaim 45.49
107 TestFunctional/parallel/SSHCmd 0.52
108 TestFunctional/parallel/CpCmd 1.68
109 TestFunctional/parallel/MySQL 28.33
110 TestFunctional/parallel/FileSync 0.29
111 TestFunctional/parallel/CertSync 1.61
115 TestFunctional/parallel/NodeLabels 0.08
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.53
119 TestFunctional/parallel/License 0.2
120 TestFunctional/parallel/Version/short 0.06
121 TestFunctional/parallel/Version/components 0.75
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.32
126 TestFunctional/parallel/ImageCommands/ImageBuild 4.57
127 TestFunctional/parallel/ImageCommands/Setup 1.08
128 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.93
129 TestFunctional/parallel/ServiceCmd/DeployApp 23.34
139 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.94
140 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.73
141 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.42
142 TestFunctional/parallel/ImageCommands/ImageRemove 0.69
143 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.87
144 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.44
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
148 TestFunctional/parallel/ServiceCmd/List 0.48
149 TestFunctional/parallel/ServiceCmd/JSONOutput 0.46
150 TestFunctional/parallel/ServiceCmd/HTTPS 0.32
151 TestFunctional/parallel/ServiceCmd/Format 0.31
152 TestFunctional/parallel/ServiceCmd/URL 0.34
153 TestFunctional/parallel/ProfileCmd/profile_not_create 0.31
154 TestFunctional/parallel/ProfileCmd/profile_list 0.3
155 TestFunctional/parallel/ProfileCmd/profile_json_output 0.29
156 TestFunctional/parallel/MountCmd/any-port 8.02
157 TestFunctional/parallel/MountCmd/specific-port 2
158 TestFunctional/parallel/MountCmd/VerifyCleanup 1.75
159 TestFunctional/delete_addon-resizer_images 0.08
160 TestFunctional/delete_my-image_image 0.02
161 TestFunctional/delete_minikube_cached_images 0.02
165 TestIngressAddonLegacy/StartLegacyK8sCluster 78.77
167 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.19
168 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.64
169 TestIngressAddonLegacy/serial/ValidateIngressAddons 42.84
172 TestJSONOutput/start/Command 64.86
173 TestJSONOutput/start/Audit 0
175 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/pause/Command 0.69
179 TestJSONOutput/pause/Audit 0
181 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/unpause/Command 0.67
185 TestJSONOutput/unpause/Audit 0
187 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/stop/Command 7.11
191 TestJSONOutput/stop/Audit 0
193 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
195 TestErrorJSONOutput 0.24
200 TestMainNoArgs 0.06
201 TestMinikubeProfile 102.82
204 TestMountStart/serial/StartWithMountFirst 29.62
205 TestMountStart/serial/VerifyMountFirst 0.43
206 TestMountStart/serial/StartWithMountSecond 29.59
207 TestMountStart/serial/VerifyMountSecond 0.42
208 TestMountStart/serial/DeleteFirst 0.91
209 TestMountStart/serial/VerifyMountPostDelete 0.43
210 TestMountStart/serial/Stop 1.22
211 TestMountStart/serial/RestartStopped 22.59
212 TestMountStart/serial/VerifyMountPostStop 0.44
215 TestMultiNode/serial/FreshStart2Nodes 115.36
216 TestMultiNode/serial/DeployApp2Nodes 6.45
217 TestMultiNode/serial/PingHostFrom2Pods 0.96
218 TestMultiNode/serial/AddNode 41.42
219 TestMultiNode/serial/MultiNodeLabels 0.07
220 TestMultiNode/serial/ProfileList 0.23
221 TestMultiNode/serial/CopyFile 8.08
222 TestMultiNode/serial/StopNode 2.27
223 TestMultiNode/serial/StartAfterStop 31.81
224 TestMultiNode/serial/RestartKeepsNodes 319.42
225 TestMultiNode/serial/DeleteNode 1.89
226 TestMultiNode/serial/StopMultiNode 183.34
227 TestMultiNode/serial/RestartMultiNode 96.93
228 TestMultiNode/serial/ValidateNameConflict 49.77
233 TestPreload 274.39
235 TestScheduledStopUnix 121.03
239 TestRunningBinaryUpgrade 182.11
241 TestKubernetesUpgrade 239.61
244 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
245 TestNoKubernetes/serial/StartWithK8s 128.71
246 TestNoKubernetes/serial/StartWithStopK8s 51.96
254 TestNetworkPlugins/group/false 3.82
258 TestNoKubernetes/serial/Start 28.68
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
260 TestNoKubernetes/serial/ProfileList 0.77
261 TestNoKubernetes/serial/Stop 1.31
262 TestNoKubernetes/serial/StartNoArgs 73.37
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
264 TestStoppedBinaryUpgrade/Setup 0.4
265 TestStoppedBinaryUpgrade/Upgrade 159.97
267 TestPause/serial/Start 129.59
275 TestNetworkPlugins/group/auto/Start 102.01
276 TestPause/serial/SecondStartNoReconfiguration 7.59
277 TestNetworkPlugins/group/flannel/Start 82.71
278 TestPause/serial/Pause 0.8
279 TestPause/serial/VerifyStatus 0.3
280 TestPause/serial/Unpause 0.75
281 TestPause/serial/PauseAgain 0.87
282 TestPause/serial/DeletePaused 1.03
283 TestPause/serial/VerifyDeletedResources 14.37
284 TestNetworkPlugins/group/enable-default-cni/Start 109.32
285 TestStoppedBinaryUpgrade/MinikubeLogs 0.96
286 TestNetworkPlugins/group/bridge/Start 97.59
287 TestNetworkPlugins/group/auto/KubeletFlags 0.26
288 TestNetworkPlugins/group/auto/NetCatPod 10.31
289 TestNetworkPlugins/group/auto/DNS 0.19
290 TestNetworkPlugins/group/auto/Localhost 0.15
291 TestNetworkPlugins/group/auto/HairPin 0.16
292 TestNetworkPlugins/group/calico/Start 107.43
293 TestNetworkPlugins/group/flannel/ControllerPod 6.01
294 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
295 TestNetworkPlugins/group/flannel/NetCatPod 10.32
296 TestNetworkPlugins/group/flannel/DNS 0.2
297 TestNetworkPlugins/group/flannel/Localhost 0.16
298 TestNetworkPlugins/group/flannel/HairPin 0.16
299 TestNetworkPlugins/group/kindnet/Start 80.55
300 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
301 TestNetworkPlugins/group/bridge/NetCatPod 10.32
302 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
303 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.34
304 TestNetworkPlugins/group/bridge/DNS 0.27
305 TestNetworkPlugins/group/bridge/Localhost 0.18
306 TestNetworkPlugins/group/bridge/HairPin 0.17
307 TestNetworkPlugins/group/enable-default-cni/DNS 0.24
308 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
309 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
310 TestNetworkPlugins/group/custom-flannel/Start 95.15
312 TestStartStop/group/old-k8s-version/serial/FirstStart 176.69
313 TestNetworkPlugins/group/calico/ControllerPod 6.01
314 TestNetworkPlugins/group/calico/KubeletFlags 0.24
315 TestNetworkPlugins/group/calico/NetCatPod 11.35
316 TestNetworkPlugins/group/calico/DNS 0.22
317 TestNetworkPlugins/group/calico/Localhost 0.21
318 TestNetworkPlugins/group/calico/HairPin 0.24
319 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
320 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
321 TestNetworkPlugins/group/kindnet/NetCatPod 11.34
323 TestStartStop/group/no-preload/serial/FirstStart 136.1
324 TestNetworkPlugins/group/kindnet/DNS 0.19
325 TestNetworkPlugins/group/kindnet/Localhost 0.17
326 TestNetworkPlugins/group/kindnet/HairPin 0.16
328 TestStartStop/group/embed-certs/serial/FirstStart 84.35
329 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
330 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.25
331 TestNetworkPlugins/group/custom-flannel/DNS 0.23
332 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
333 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
335 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 107.18
336 TestStartStop/group/embed-certs/serial/DeployApp 8.37
337 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.48
338 TestStartStop/group/embed-certs/serial/Stop 91.82
339 TestStartStop/group/old-k8s-version/serial/DeployApp 8.47
340 TestStartStop/group/no-preload/serial/DeployApp 8.35
341 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.99
342 TestStartStop/group/old-k8s-version/serial/Stop 92.1
343 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.12
344 TestStartStop/group/no-preload/serial/Stop 91.87
345 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.3
346 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.34
347 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.85
348 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
349 TestStartStop/group/embed-certs/serial/SecondStart 580.29
350 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
351 TestStartStop/group/old-k8s-version/serial/SecondStart 102.29
352 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
353 TestStartStop/group/no-preload/serial/SecondStart 350.87
354 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
355 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 336.43
356 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 41.01
357 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
358 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
359 TestStartStop/group/old-k8s-version/serial/Pause 2.88
361 TestStartStop/group/newest-cni/serial/FirstStart 62.99
362 TestStartStop/group/newest-cni/serial/DeployApp 0
363 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.43
364 TestStartStop/group/newest-cni/serial/Stop 12.15
365 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
366 TestStartStop/group/newest-cni/serial/SecondStart 51.59
367 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
368 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
369 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
370 TestStartStop/group/newest-cni/serial/Pause 2.89
371 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
372 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.1
373 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
374 TestStartStop/group/no-preload/serial/Pause 2.87
375 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 17.01
376 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
377 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
378 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.79
379 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
380 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
381 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
382 TestStartStop/group/embed-certs/serial/Pause 2.8

TestDownloadOnly/v1.16.0/json-events (10.48s)
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-378414 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-378414 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (10.481179829s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (10.48s)

TestDownloadOnly/v1.16.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-378414
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-378414: exit status 85 (81.323934ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-378414 | jenkins | v1.32.0 | 16 Jan 24 02:27 UTC |          |
	|         | -p download-only-378414        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 02:27:09
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 02:27:09.712459  337886 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:27:09.712611  337886 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:27:09.712622  337886 out.go:309] Setting ErrFile to fd 2...
	I0116 02:27:09.712627  337886 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:27:09.712832  337886 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-330687/.minikube/bin
	W0116 02:27:09.712963  337886 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17965-330687/.minikube/config/config.json: open /home/jenkins/minikube-integration/17965-330687/.minikube/config/config.json: no such file or directory
	I0116 02:27:09.713631  337886 out.go:303] Setting JSON to true
	I0116 02:27:09.714591  337886 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":32982,"bootTime":1705339048,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:27:09.714663  337886 start.go:138] virtualization: kvm guest
	I0116 02:27:09.717209  337886 out.go:97] [download-only-378414] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 02:27:09.718793  337886 out.go:169] MINIKUBE_LOCATION=17965
	W0116 02:27:09.717371  337886 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17965-330687/.minikube/cache/preloaded-tarball: no such file or directory
	I0116 02:27:09.717449  337886 notify.go:220] Checking for updates...
	I0116 02:27:09.721966  337886 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:27:09.723567  337886 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17965-330687/kubeconfig
	I0116 02:27:09.724978  337886 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-330687/.minikube
	I0116 02:27:09.726548  337886 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0116 02:27:09.729257  337886 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0116 02:27:09.729650  337886 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:27:09.768250  337886 out.go:97] Using the kvm2 driver based on user configuration
	I0116 02:27:09.768284  337886 start.go:298] selected driver: kvm2
	I0116 02:27:09.768294  337886 start.go:902] validating driver "kvm2" against <nil>
	I0116 02:27:09.768686  337886 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 02:27:09.768814  337886 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17965-330687/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 02:27:09.786211  337886 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 02:27:09.786318  337886 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 02:27:09.786905  337886 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0116 02:27:09.787051  337886 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0116 02:27:09.787134  337886 cni.go:84] Creating CNI manager for ""
	I0116 02:27:09.787149  337886 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0116 02:27:09.787159  337886 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0116 02:27:09.787166  337886 start_flags.go:321] config:
	{Name:download-only-378414 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-378414 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:27:09.787400  337886 iso.go:125] acquiring lock: {Name:mk83fca54b69be1d8016cc7581ed959170948280 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 02:27:09.789408  337886 out.go:97] Downloading VM boot image ...
	I0116 02:27:09.789445  337886 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17965-330687/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0116 02:27:12.440792  337886 out.go:97] Starting control plane node download-only-378414 in cluster download-only-378414
	I0116 02:27:12.440875  337886 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0116 02:27:12.462678  337886 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0116 02:27:12.462724  337886 cache.go:56] Caching tarball of preloaded images
	I0116 02:27:12.462903  337886 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0116 02:27:12.464946  337886 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0116 02:27:12.464986  337886 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0116 02:27:12.493852  337886 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/17965-330687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0116 02:27:15.701504  337886 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0116 02:27:15.701614  337886 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17965-330687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0116 02:27:16.597566  337886 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0116 02:27:16.598055  337886 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/download-only-378414/config.json ...
	I0116 02:27:16.598117  337886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/download-only-378414/config.json: {Name:mk62257f1efe6c1e23078e7ae198be0b22416b0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:27:16.598334  337886 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0116 02:27:16.598581  337886 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17965-330687/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-378414"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

TestDownloadOnly/v1.16.0/DeleteAll (0.16s)
=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.16s)

TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.16s)
=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-378414
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.16s)

TestDownloadOnly/v1.28.4/json-events (4.89s)
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-612167 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-612167 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (4.889806442s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (4.89s)

TestDownloadOnly/v1.28.4/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-612167
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-612167: exit status 85 (79.18574ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-378414 | jenkins | v1.32.0 | 16 Jan 24 02:27 UTC |                     |
	|         | -p download-only-378414        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 16 Jan 24 02:27 UTC | 16 Jan 24 02:27 UTC |
	| delete  | -p download-only-378414        | download-only-378414 | jenkins | v1.32.0 | 16 Jan 24 02:27 UTC | 16 Jan 24 02:27 UTC |
	| start   | -o=json --download-only        | download-only-612167 | jenkins | v1.32.0 | 16 Jan 24 02:27 UTC |                     |
	|         | -p download-only-612167        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 02:27:20
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 02:27:20.592721  338049 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:27:20.592861  338049 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:27:20.592874  338049 out.go:309] Setting ErrFile to fd 2...
	I0116 02:27:20.592880  338049 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:27:20.593090  338049 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-330687/.minikube/bin
	I0116 02:27:20.593743  338049 out.go:303] Setting JSON to true
	I0116 02:27:20.594721  338049 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":32993,"bootTime":1705339048,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:27:20.594805  338049 start.go:138] virtualization: kvm guest
	I0116 02:27:20.597256  338049 out.go:97] [download-only-612167] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 02:27:20.598867  338049 out.go:169] MINIKUBE_LOCATION=17965
	I0116 02:27:20.597489  338049 notify.go:220] Checking for updates...
	I0116 02:27:20.601729  338049 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:27:20.603308  338049 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17965-330687/kubeconfig
	I0116 02:27:20.604754  338049 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-330687/.minikube
	I0116 02:27:20.606075  338049 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-612167"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

TestDownloadOnly/v1.28.4/DeleteAll (0.15s)
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.15s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.15s)
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-612167
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.29.0-rc.2/json-events (7.46s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-990942 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-990942 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (7.45845433s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (7.46s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-990942
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-990942: exit status 85 (78.230697ms)
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-378414 | jenkins | v1.32.0 | 16 Jan 24 02:27 UTC |                     |
	|         | -p download-only-378414           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 16 Jan 24 02:27 UTC | 16 Jan 24 02:27 UTC |
	| delete  | -p download-only-378414           | download-only-378414 | jenkins | v1.32.0 | 16 Jan 24 02:27 UTC | 16 Jan 24 02:27 UTC |
	| start   | -o=json --download-only           | download-only-612167 | jenkins | v1.32.0 | 16 Jan 24 02:27 UTC |                     |
	|         | -p download-only-612167           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 16 Jan 24 02:27 UTC | 16 Jan 24 02:27 UTC |
	| delete  | -p download-only-612167           | download-only-612167 | jenkins | v1.32.0 | 16 Jan 24 02:27 UTC | 16 Jan 24 02:27 UTC |
	| start   | -o=json --download-only           | download-only-990942 | jenkins | v1.32.0 | 16 Jan 24 02:27 UTC |                     |
	|         | -p download-only-990942           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 02:27:25
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 02:27:25.865322  338199 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:27:25.865451  338199 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:27:25.865463  338199 out.go:309] Setting ErrFile to fd 2...
	I0116 02:27:25.865468  338199 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:27:25.865672  338199 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-330687/.minikube/bin
	I0116 02:27:25.866268  338199 out.go:303] Setting JSON to true
	I0116 02:27:25.867190  338199 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":32998,"bootTime":1705339048,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:27:25.867263  338199 start.go:138] virtualization: kvm guest
	I0116 02:27:25.869540  338199 out.go:97] [download-only-990942] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 02:27:25.871324  338199 out.go:169] MINIKUBE_LOCATION=17965
	I0116 02:27:25.869739  338199 notify.go:220] Checking for updates...
	I0116 02:27:25.874394  338199 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:27:25.876032  338199 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17965-330687/kubeconfig
	I0116 02:27:25.877565  338199 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-330687/.minikube
	I0116 02:27:25.879153  338199 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0116 02:27:25.881668  338199 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0116 02:27:25.881915  338199 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:27:25.915465  338199 out.go:97] Using the kvm2 driver based on user configuration
	I0116 02:27:25.915506  338199 start.go:298] selected driver: kvm2
	I0116 02:27:25.915512  338199 start.go:902] validating driver "kvm2" against <nil>
	I0116 02:27:25.915857  338199 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 02:27:25.915981  338199 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17965-330687/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 02:27:25.932174  338199 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 02:27:25.932248  338199 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 02:27:25.932756  338199 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0116 02:27:25.932908  338199 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0116 02:27:25.932981  338199 cni.go:84] Creating CNI manager for ""
	I0116 02:27:25.932995  338199 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0116 02:27:25.933005  338199 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0116 02:27:25.933014  338199 start_flags.go:321] config:
	{Name:download-only-990942 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-990942 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:27:25.933169  338199 iso.go:125] acquiring lock: {Name:mk83fca54b69be1d8016cc7581ed959170948280 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 02:27:25.935117  338199 out.go:97] Starting control plane node download-only-990942 in cluster download-only-990942
	I0116 02:27:25.935146  338199 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0116 02:27:25.963707  338199 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I0116 02:27:25.963749  338199 cache.go:56] Caching tarball of preloaded images
	I0116 02:27:25.963962  338199 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0116 02:27:25.966078  338199 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0116 02:27:25.966100  338199 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0116 02:27:25.993633  338199 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:e143dbc3b8285cd3241a841ac2b6b7fc -> /home/jenkins/minikube-integration/17965-330687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I0116 02:27:28.780595  338199 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0116 02:27:28.780702  338199 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17965-330687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0116 02:27:29.597904  338199 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on containerd
	I0116 02:27:29.598295  338199 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/download-only-990942/config.json ...
	I0116 02:27:29.598333  338199 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/download-only-990942/config.json: {Name:mka7d80ec67562c89f686f13e4ddec07aa219b15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:27:29.598538  338199 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0116 02:27:29.598686  338199 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17965-330687/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-990942"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.15s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.15s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.15s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-990942
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.61s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-511930 --alsologtostderr --binary-mirror http://127.0.0.1:45737 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-511930" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-511930
--- PASS: TestBinaryMirror (0.61s)

TestOffline (120.14s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-943665 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-943665 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m59.284043522s)
helpers_test.go:175: Cleaning up "offline-containerd-943665" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-943665
--- PASS: TestOffline (120.14s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-133977
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-133977: exit status 85 (67.222705ms)
-- stdout --
	* Profile "addons-133977" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-133977"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-133977
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-133977: exit status 85 (69.023473ms)
-- stdout --
	* Profile "addons-133977" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-133977"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (149.9s)
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-133977 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-133977 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m29.903050633s)
--- PASS: TestAddons/Setup (149.90s)

TestAddons/parallel/Ingress (21.86s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-133977 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-133977 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-133977 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [dbf064a6-668d-4915-ab11-deca90061bde] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [dbf064a6-668d-4915-ab11-deca90061bde] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004417571s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-133977 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-133977 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-133977 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.10
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-133977 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-133977 addons disable ingress-dns --alsologtostderr -v=1: (1.725162003s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-133977 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-133977 addons disable ingress --alsologtostderr -v=1: (8.565512314s)
--- PASS: TestAddons/parallel/Ingress (21.86s)

TestAddons/parallel/InspektorGadget (12.19s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-lnl4p" [cbdde97d-c2d0-4927-85ab-129da5335a0e] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00477146s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-133977
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-133977: (6.187268295s)
--- PASS: TestAddons/parallel/InspektorGadget (12.19s)

TestAddons/parallel/MetricsServer (7.12s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 26.163484ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-x4fzp" [43297a34-7c32-4122-9147-d2d3f776506e] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.006345328s
addons_test.go:415: (dbg) Run:  kubectl --context addons-133977 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-133977 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-linux-amd64 -p addons-133977 addons disable metrics-server --alsologtostderr -v=1: (1.014579483s)
--- PASS: TestAddons/parallel/MetricsServer (7.12s)

TestAddons/parallel/HelmTiller (15.16s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 4.481437ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-jl58n" [9d6d3cc6-9f52-4331-a5cc-860d379851ff] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.005211638s
addons_test.go:473: (dbg) Run:  kubectl --context addons-133977 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-133977 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.081396126s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-133977 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Done: out/minikube-linux-amd64 -p addons-133977 addons disable helm-tiller --alsologtostderr -v=1: (1.071155295s)
--- PASS: TestAddons/parallel/HelmTiller (15.16s)

TestAddons/parallel/CSI (37.55s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 13.925256ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-133977 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133977 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133977 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133977 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133977 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133977 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-133977 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [4174399c-2eea-4d1c-a7c3-e9e121816365] Pending
helpers_test.go:344: "task-pv-pod" [4174399c-2eea-4d1c-a7c3-e9e121816365] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [4174399c-2eea-4d1c-a7c3-e9e121816365] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.005788042s
addons_test.go:584: (dbg) Run:  kubectl --context addons-133977 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-133977 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-133977 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-133977 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-133977 delete pod task-pv-pod: (1.077661867s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-133977 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-133977 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133977 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133977 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-133977 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [26fd0f34-80c4-43c5-b183-30ef01f42bb9] Pending
helpers_test.go:344: "task-pv-pod-restore" [26fd0f34-80c4-43c5-b183-30ef01f42bb9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [26fd0f34-80c4-43c5-b183-30ef01f42bb9] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005040616s
addons_test.go:626: (dbg) Run:  kubectl --context addons-133977 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-133977 delete pod task-pv-pod-restore: (1.596113407s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-133977 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-133977 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-133977 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-133977 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.491066186s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-133977 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (37.55s)

TestAddons/parallel/Headlamp (12.58s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-133977 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-133977 --alsologtostderr -v=1: (1.577517872s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-rgkqx" [683c54e1-1779-43f8-9eb3-225aab705fc8] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-rgkqx" [683c54e1-1779-43f8-9eb3-225aab705fc8] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-rgkqx" [683c54e1-1779-43f8-9eb3-225aab705fc8] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-rgkqx" [683c54e1-1779-43f8-9eb3-225aab705fc8] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.00432469s
--- PASS: TestAddons/parallel/Headlamp (12.58s)

TestAddons/parallel/CloudSpanner (5.78s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-gfdt9" [2e8bfd68-0cfb-4bb3-b192-97355ab7aa52] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003905096s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-133977
--- PASS: TestAddons/parallel/CloudSpanner (5.78s)

TestAddons/parallel/LocalPath (55.45s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-133977 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-133977 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133977 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133977 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133977 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133977 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133977 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133977 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133977 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [1452b407-e53e-4ed3-8b67-821df79ab24e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [1452b407-e53e-4ed3-8b67-821df79ab24e] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [1452b407-e53e-4ed3-8b67-821df79ab24e] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.00503286s
addons_test.go:891: (dbg) Run:  kubectl --context addons-133977 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-133977 ssh "cat /opt/local-path-provisioner/pvc-bbaf009a-9d85-4a80-afb5-fe15879d7219_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-133977 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-133977 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-133977 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-133977 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.587043746s)
--- PASS: TestAddons/parallel/LocalPath (55.45s)
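Note: the host-path read above can be repeated manually. The PVC name (test-pvc) and the /opt/local-path-provisioner/pvc-<uid>_default_test-pvc path layout are taken from the log; resolving the bound PV name via .spec.volumeName is an assumption about how to rebuild that directory name.
    # hedged sketch: find the local-path volume backing test-pvc and read the file written by the test pod
    PV=$(kubectl --context addons-133977 get pvc test-pvc -o jsonpath='{.spec.volumeName}')
    out/minikube-linux-amd64 -p addons-133977 ssh "cat /opt/local-path-provisioner/${PV}_default_test-pvc/file1"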

TestAddons/parallel/NvidiaDevicePlugin (6.94s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-bczv7" [19af4a51-522e-44b5-9d02-bb291a9d7704] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.180571008s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-133977
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.94s)

TestAddons/parallel/Yakd (6.18s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-qz226" [c31cdb79-5f13-4b5c-aa4d-a437028ea2ed] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.179964129s
--- PASS: TestAddons/parallel/Yakd (6.18s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-133977 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-133977 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)
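Note: the two commands above check that the gcp-auth addon makes its image-pull secret available in newly created namespaces; the same check can be run by hand (commands taken from the log, the comment is an interpretation).
    kubectl --context addons-133977 create ns new-namespace
    kubectl --context addons-133977 get secret gcp-auth -n new-namespace   # present only if the addon replicated it into the new namespace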

TestAddons/StoppedEnableDisable (92.53s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-133977
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-133977: (1m32.186409041s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-133977
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-133977
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-133977
--- PASS: TestAddons/StoppedEnableDisable (92.53s)

TestCertOptions (65.06s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-974015 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-974015 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m3.490589113s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-974015 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-974015 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-974015 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-974015" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-974015
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-974015: (1.054991906s)
--- PASS: TestCertOptions (65.06s)

TestCertExpiration (290.26s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-985631 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-985631 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m14.210382546s)
E0116 03:06:12.689714  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/functional-084612/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-985631 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-985631 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (34.852388788s)
helpers_test.go:175: Cleaning up "cert-expiration-985631" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-985631
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-985631: (1.197349196s)
--- PASS: TestCertExpiration (290.26s)
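Note: the expiration exercise above amounts to two starts of the same profile with different --cert-expiration values; a minimal manual sketch (profile name and flags reused from the log, the wait step is implied by the 290s duration):
    out/minikube-linux-amd64 start -p cert-expiration-985631 --memory=2048 --cert-expiration=3m --driver=kvm2 --container-runtime=containerd
    # wait for the short-lived certificates to approach expiry, then restart with a long expiration so they are regenerated
    out/minikube-linux-amd64 start -p cert-expiration-985631 --memory=2048 --cert-expiration=8760h --driver=kvm2 --container-runtime=containerd
    out/minikube-linux-amd64 delete -p cert-expiration-985631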

TestForceSystemdFlag (109.25s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-058974 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-058974 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m47.935822105s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-058974 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-058974" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-058974
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-058974: (1.087105125s)
--- PASS: TestForceSystemdFlag (109.25s)
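Note: the containerd config dump above can be narrowed to the cgroup-driver setting; grepping for SystemdCgroup is an assumption about what --force-systemd should change, not the test's own assertion.
    out/minikube-linux-amd64 start -p force-systemd-flag-058974 --memory=2048 --force-systemd --driver=kvm2 --container-runtime=containerd
    out/minikube-linux-amd64 -p force-systemd-flag-058974 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup   # expect SystemdCgroup = true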

TestForceSystemdEnv (51.14s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-972159 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-972159 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (49.223717304s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-972159 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-972159" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-972159
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-972159: (1.674482679s)
--- PASS: TestForceSystemdEnv (51.14s)

TestKVMDriverInstallOrUpdate (1.38s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.38s)

TestErrorSpam/setup (47.91s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-373246 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-373246 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-373246 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-373246 --driver=kvm2  --container-runtime=containerd: (47.910851907s)
--- PASS: TestErrorSpam/setup (47.91s)

TestErrorSpam/start (0.41s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-373246 --log_dir /tmp/nospam-373246 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-373246 --log_dir /tmp/nospam-373246 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-373246 --log_dir /tmp/nospam-373246 start --dry-run
--- PASS: TestErrorSpam/start (0.41s)

TestErrorSpam/status (0.82s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-373246 --log_dir /tmp/nospam-373246 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-373246 --log_dir /tmp/nospam-373246 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-373246 --log_dir /tmp/nospam-373246 status
--- PASS: TestErrorSpam/status (0.82s)

TestErrorSpam/pause (1.64s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-373246 --log_dir /tmp/nospam-373246 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-373246 --log_dir /tmp/nospam-373246 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-373246 --log_dir /tmp/nospam-373246 pause
--- PASS: TestErrorSpam/pause (1.64s)

TestErrorSpam/unpause (1.76s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-373246 --log_dir /tmp/nospam-373246 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-373246 --log_dir /tmp/nospam-373246 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-373246 --log_dir /tmp/nospam-373246 unpause
--- PASS: TestErrorSpam/unpause (1.76s)

TestErrorSpam/stop (2.29s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-373246 --log_dir /tmp/nospam-373246 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-373246 --log_dir /tmp/nospam-373246 stop: (2.102776816s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-373246 --log_dir /tmp/nospam-373246 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-373246 --log_dir /tmp/nospam-373246 stop
--- PASS: TestErrorSpam/stop (2.29s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17965-330687/.minikube/files/etc/test/nested/copy/337873/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (61.83s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-amd64 start -p functional-084612 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0116 02:35:04.673982  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.crt: no such file or directory
E0116 02:35:04.679867  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.crt: no such file or directory
E0116 02:35:04.690175  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.crt: no such file or directory
E0116 02:35:04.710559  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.crt: no such file or directory
E0116 02:35:04.750921  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.crt: no such file or directory
E0116 02:35:04.831232  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.crt: no such file or directory
E0116 02:35:04.991710  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.crt: no such file or directory
E0116 02:35:05.312501  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.crt: no such file or directory
functional_test.go:2233: (dbg) Done: out/minikube-linux-amd64 start -p functional-084612 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m1.829764832s)
--- PASS: TestFunctional/serial/StartWithProxy (61.83s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.47s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-084612 --alsologtostderr -v=8
E0116 02:35:05.953023  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.crt: no such file or directory
E0116 02:35:07.233962  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.crt: no such file or directory
E0116 02:35:09.794512  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-084612 --alsologtostderr -v=8: (6.466553178s)
functional_test.go:659: soft start took 6.467447657s for "functional-084612" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.47s)

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-084612 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.83s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-084612 cache add registry.k8s.io/pause:3.1: (1.267438219s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 cache add registry.k8s.io/pause:3.3
E0116 02:35:14.915463  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-084612 cache add registry.k8s.io/pause:3.3: (1.301721347s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-084612 cache add registry.k8s.io/pause:latest: (1.265507895s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.83s)

TestFunctional/serial/CacheCmd/cache/add_local (1.38s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-084612 /tmp/TestFunctionalserialCacheCmdcacheadd_local2790232505/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 cache add minikube-local-cache-test:functional-084612
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-084612 cache add minikube-local-cache-test:functional-084612: (1.022733189s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 cache delete minikube-local-cache-test:functional-084612
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-084612
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.38s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-084612 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (247.488419ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-084612 cache reload: (1.31038521s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.08s)
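Note: the reload round trip above, in shell form (all commands taken from the log; the comments describe the expected outcome shown there):
    out/minikube-linux-amd64 -p functional-084612 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-084612 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image was removed from the node
    out/minikube-linux-amd64 -p functional-084612 cache reload                                            # pushes cached images back onto the node
    out/minikube-linux-amd64 -p functional-084612 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again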

TestFunctional/serial/CacheCmd/cache/delete (0.14s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 kubectl -- --context functional-084612 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-084612 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (43.8s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-084612 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0116 02:35:25.156594  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.crt: no such file or directory
E0116 02:35:45.637032  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-084612 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.802038819s)
functional_test.go:757: restart took 43.802186673s for "functional-084612" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.80s)
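Note: --extra-config takes component.key=value pairs and is applied on the restart above. The start command is from the log; the follow-up grep is an added, hedged way to confirm the admission plugin reached the apiserver manifest, not something the test does.
    out/minikube-linux-amd64 start -p functional-084612 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    kubectl --context functional-084612 -n kube-system get pod -l component=kube-apiserver -o yaml | grep enable-admission-plugins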

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-084612 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.57s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-084612 logs: (1.570754787s)
--- PASS: TestFunctional/serial/LogsCmd (1.57s)

TestFunctional/serial/LogsFileCmd (1.57s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 logs --file /tmp/TestFunctionalserialLogsFileCmd3732633155/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-084612 logs --file /tmp/TestFunctionalserialLogsFileCmd3732633155/001/logs.txt: (1.564072216s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.57s)

TestFunctional/serial/InvalidService (4.41s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-084612 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-084612
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-084612: exit status 115 (320.238889ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.87:31016 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-084612 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.41s)
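Note: exit status 115 / SVC_UNREACHABLE above means the service exists but has no running backend pod. The apply, service and delete commands are from the log; the endpoints check is an added, hedged way to see why routing fails.
    kubectl --context functional-084612 apply -f testdata/invalidsvc.yaml
    kubectl --context functional-084612 get endpoints invalid-svc          # no addresses, so there is nothing to route to
    out/minikube-linux-amd64 service invalid-svc -p functional-084612      # exits 115 with SVC_UNREACHABLE
    kubectl --context functional-084612 delete -f testdata/invalidsvc.yaml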

TestFunctional/parallel/ConfigCmd (0.48s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-084612 config get cpus: exit status 14 (76.013707ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-084612 config get cpus: exit status 14 (63.487187ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
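Note: the exit status 14 results above are expected, since `config get` returns non-zero when the key is unset. The round trip in shell form (commands from the log, comments added):
    out/minikube-linux-amd64 -p functional-084612 config get cpus     # exit 14: key not set
    out/minikube-linux-amd64 -p functional-084612 config set cpus 2
    out/minikube-linux-amd64 -p functional-084612 config get cpus     # prints 2
    out/minikube-linux-amd64 -p functional-084612 config unset cpus
    out/minikube-linux-amd64 -p functional-084612 config get cpus     # exit 14 again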

TestFunctional/parallel/DashboardCmd (13.26s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-084612 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-084612 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 344950: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.26s)

TestFunctional/parallel/DryRun (0.32s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-084612 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-084612 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (166.034165ms)

-- stdout --
	* [functional-084612] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17965
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17965-330687/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-330687/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0116 02:36:40.823171  344723 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:36:40.823300  344723 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:36:40.823322  344723 out.go:309] Setting ErrFile to fd 2...
	I0116 02:36:40.823330  344723 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:36:40.823612  344723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-330687/.minikube/bin
	I0116 02:36:40.824218  344723 out.go:303] Setting JSON to false
	I0116 02:36:40.825321  344723 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":33553,"bootTime":1705339048,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:36:40.825411  344723 start.go:138] virtualization: kvm guest
	I0116 02:36:40.827964  344723 out.go:177] * [functional-084612] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 02:36:40.829781  344723 out.go:177]   - MINIKUBE_LOCATION=17965
	I0116 02:36:40.829829  344723 notify.go:220] Checking for updates...
	I0116 02:36:40.831571  344723 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:36:40.833566  344723 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17965-330687/kubeconfig
	I0116 02:36:40.835535  344723 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-330687/.minikube
	I0116 02:36:40.837076  344723 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 02:36:40.838912  344723 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 02:36:40.841141  344723 config.go:182] Loaded profile config "functional-084612": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0116 02:36:40.841789  344723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:36:40.841863  344723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:36:40.858102  344723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38023
	I0116 02:36:40.858713  344723 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:36:40.859336  344723 main.go:141] libmachine: Using API Version  1
	I0116 02:36:40.859365  344723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:36:40.859830  344723 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:36:40.860060  344723 main.go:141] libmachine: (functional-084612) Calling .DriverName
	I0116 02:36:40.860398  344723 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:36:40.860861  344723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:36:40.860920  344723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:36:40.879568  344723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34979
	I0116 02:36:40.880181  344723 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:36:40.880694  344723 main.go:141] libmachine: Using API Version  1
	I0116 02:36:40.880720  344723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:36:40.881091  344723 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:36:40.881308  344723 main.go:141] libmachine: (functional-084612) Calling .DriverName
	I0116 02:36:40.917990  344723 out.go:177] * Using the kvm2 driver based on existing profile
	I0116 02:36:40.919680  344723 start.go:298] selected driver: kvm2
	I0116 02:36:40.919707  344723 start.go:902] validating driver "kvm2" against &{Name:functional-084612 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-084612 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.87 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:36:40.919828  344723 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 02:36:40.922397  344723 out.go:177] 
	W0116 02:36:40.924146  344723 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0116 02:36:40.925862  344723 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-084612 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.32s)
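Note: --dry-run only validates the flags against the existing profile. The 250MB request fails with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23) because, as the stderr above states, minikube requires at least 1800MB; the second invocation passes. Both commands are from the log:
    out/minikube-linux-amd64 start -p functional-084612 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=containerd   # exit 23
    out/minikube-linux-amd64 start -p functional-084612 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=containerd             # succeeds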

TestFunctional/parallel/InternationalLanguage (0.16s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-084612 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-084612 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (159.523572ms)

-- stdout --
	* [functional-084612] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17965
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17965-330687/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-330687/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0116 02:36:41.146577  344778 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:36:41.146733  344778 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:36:41.146746  344778 out.go:309] Setting ErrFile to fd 2...
	I0116 02:36:41.146753  344778 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:36:41.147068  344778 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-330687/.minikube/bin
	I0116 02:36:41.147653  344778 out.go:303] Setting JSON to false
	I0116 02:36:41.148673  344778 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":33553,"bootTime":1705339048,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:36:41.148750  344778 start.go:138] virtualization: kvm guest
	I0116 02:36:41.151270  344778 out.go:177] * [functional-084612] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0116 02:36:41.153036  344778 out.go:177]   - MINIKUBE_LOCATION=17965
	I0116 02:36:41.153168  344778 notify.go:220] Checking for updates...
	I0116 02:36:41.154880  344778 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:36:41.156940  344778 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17965-330687/kubeconfig
	I0116 02:36:41.158875  344778 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-330687/.minikube
	I0116 02:36:41.160570  344778 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 02:36:41.162190  344778 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 02:36:41.164655  344778 config.go:182] Loaded profile config "functional-084612": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0116 02:36:41.165380  344778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:36:41.165451  344778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:36:41.180663  344778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44187
	I0116 02:36:41.181177  344778 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:36:41.181713  344778 main.go:141] libmachine: Using API Version  1
	I0116 02:36:41.181746  344778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:36:41.182119  344778 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:36:41.182364  344778 main.go:141] libmachine: (functional-084612) Calling .DriverName
	I0116 02:36:41.182676  344778 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:36:41.183015  344778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:36:41.183062  344778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:36:41.198081  344778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37703
	I0116 02:36:41.198635  344778 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:36:41.199320  344778 main.go:141] libmachine: Using API Version  1
	I0116 02:36:41.199352  344778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:36:41.199742  344778 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:36:41.199994  344778 main.go:141] libmachine: (functional-084612) Calling .DriverName
	I0116 02:36:41.235425  344778 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0116 02:36:41.237221  344778 start.go:298] selected driver: kvm2
	I0116 02:36:41.237245  344778 start.go:902] validating driver "kvm2" against &{Name:functional-084612 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-084612 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.87 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:36:41.237387  344778 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 02:36:41.239697  344778 out.go:177] 
	W0116 02:36:41.241301  344778 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0116 02:36:41.242875  344778 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (0.84s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.84s)

TestFunctional/parallel/ServiceCmdConnect (9.75s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-084612 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-084612 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-5s92c" [427e58c9-5e80-4da8-8f6f-92d4083416a2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-5s92c" [427e58c9-5e80-4da8-8f6f-92d4083416a2] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.005060808s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.87:30562
functional_test.go:1674: http://192.168.39.87:30562: success! body:

Hostname: hello-node-connect-55497b8b78-5s92c

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.87:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.87:30562
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.75s)

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (45.49s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9c3be8be-0430-4b76-92c6-d5c8649cb8a2] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005400807s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-084612 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-084612 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-084612 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-084612 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-084612 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3e6306fb-cc05-4673-889e-8c17a60df9c1] Pending
helpers_test.go:344: "sp-pod" [3e6306fb-cc05-4673-889e-8c17a60df9c1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0116 02:36:26.597968  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [3e6306fb-cc05-4673-889e-8c17a60df9c1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.043953153s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-084612 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-084612 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-084612 delete -f testdata/storage-provisioner/pod.yaml: (1.842052731s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-084612 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e21ac395-78dc-49f2-b1fa-65912f6624a6] Pending
helpers_test.go:344: "sp-pod" [e21ac395-78dc-49f2-b1fa-65912f6624a6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e21ac395-78dc-49f2-b1fa-65912f6624a6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004467625s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-084612 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.49s)

TestFunctional/parallel/SSHCmd (0.52s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.52s)

TestFunctional/parallel/CpCmd (1.68s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 ssh -n functional-084612 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 cp functional-084612:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2466261729/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 ssh -n functional-084612 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 ssh -n functional-084612 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.68s)

TestFunctional/parallel/MySQL (28.33s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-084612 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-q2jss" [b1567389-9542-47dd-8e79-49876d30bbaf] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-q2jss" [b1567389-9542-47dd-8e79-49876d30bbaf] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.004812378s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-084612 exec mysql-859648c796-q2jss -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-084612 exec mysql-859648c796-q2jss -- mysql -ppassword -e "show databases;": exit status 1 (194.795744ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-084612 exec mysql-859648c796-q2jss -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-084612 exec mysql-859648c796-q2jss -- mysql -ppassword -e "show databases;": exit status 1 (242.818741ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-084612 exec mysql-859648c796-q2jss -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-084612 exec mysql-859648c796-q2jss -- mysql -ppassword -e "show databases;": exit status 1 (356.95949ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-084612 exec mysql-859648c796-q2jss -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-084612 exec mysql-859648c796-q2jss -- mysql -ppassword -e "show databases;": exit status 1 (220.786938ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-084612 exec mysql-859648c796-q2jss -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.33s)

TestFunctional/parallel/FileSync (0.29s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/337873/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 ssh "sudo cat /etc/test/nested/copy/337873/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

TestFunctional/parallel/CertSync (1.61s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/337873.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 ssh "sudo cat /etc/ssl/certs/337873.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/337873.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 ssh "sudo cat /usr/share/ca-certificates/337873.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3378732.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 ssh "sudo cat /etc/ssl/certs/3378732.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/3378732.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 ssh "sudo cat /usr/share/ca-certificates/3378732.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.61s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-084612 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-084612 ssh "sudo systemctl is-active docker": exit status 1 (276.140696ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 ssh "sudo systemctl is-active crio"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-084612 ssh "sudo systemctl is-active crio": exit status 1 (253.040647ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)

TestFunctional/parallel/License (0.2s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.75s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.75s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-084612 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-084612
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-084612
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-084612 image ls --format short --alsologtostderr:
I0116 02:36:44.962641  345053 out.go:296] Setting OutFile to fd 1 ...
I0116 02:36:44.962815  345053 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:36:44.962825  345053 out.go:309] Setting ErrFile to fd 2...
I0116 02:36:44.962829  345053 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:36:44.963052  345053 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-330687/.minikube/bin
I0116 02:36:44.963739  345053 config.go:182] Loaded profile config "functional-084612": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0116 02:36:44.963892  345053 config.go:182] Loaded profile config "functional-084612": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0116 02:36:44.964344  345053 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0116 02:36:44.964419  345053 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:36:44.979709  345053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40083
I0116 02:36:44.980213  345053 main.go:141] libmachine: () Calling .GetVersion
I0116 02:36:44.980824  345053 main.go:141] libmachine: Using API Version  1
I0116 02:36:44.980860  345053 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:36:44.981216  345053 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:36:44.981421  345053 main.go:141] libmachine: (functional-084612) Calling .GetState
I0116 02:36:44.983421  345053 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0116 02:36:44.983469  345053 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:36:44.999431  345053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33905
I0116 02:36:44.999930  345053 main.go:141] libmachine: () Calling .GetVersion
I0116 02:36:45.000520  345053 main.go:141] libmachine: Using API Version  1
I0116 02:36:45.000552  345053 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:36:45.000882  345053 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:36:45.001053  345053 main.go:141] libmachine: (functional-084612) Calling .DriverName
I0116 02:36:45.001220  345053 ssh_runner.go:195] Run: systemctl --version
I0116 02:36:45.001246  345053 main.go:141] libmachine: (functional-084612) Calling .GetSSHHostname
I0116 02:36:45.004157  345053 main.go:141] libmachine: (functional-084612) DBG | domain functional-084612 has defined MAC address 52:54:00:28:65:b1 in network mk-functional-084612
I0116 02:36:45.004498  345053 main.go:141] libmachine: (functional-084612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:65:b1", ip: ""} in network mk-functional-084612: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:20 +0000 UTC Type:0 Mac:52:54:00:28:65:b1 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:functional-084612 Clientid:01:52:54:00:28:65:b1}
I0116 02:36:45.004528  345053 main.go:141] libmachine: (functional-084612) DBG | domain functional-084612 has defined IP address 192.168.39.87 and MAC address 52:54:00:28:65:b1 in network mk-functional-084612
I0116 02:36:45.004718  345053 main.go:141] libmachine: (functional-084612) Calling .GetSSHPort
I0116 02:36:45.004948  345053 main.go:141] libmachine: (functional-084612) Calling .GetSSHKeyPath
I0116 02:36:45.005140  345053 main.go:141] libmachine: (functional-084612) Calling .GetSSHUsername
I0116 02:36:45.005278  345053 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-330687/.minikube/machines/functional-084612/id_rsa Username:docker}
I0116 02:36:45.097670  345053 ssh_runner.go:195] Run: sudo crictl images --output json
I0116 02:36:45.151568  345053 main.go:141] libmachine: Making call to close driver server
I0116 02:36:45.151588  345053 main.go:141] libmachine: (functional-084612) Calling .Close
I0116 02:36:45.151891  345053 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:36:45.151918  345053 main.go:141] libmachine: Making call to close connection to plugin binary
I0116 02:36:45.151929  345053 main.go:141] libmachine: Making call to close driver server
I0116 02:36:45.151939  345053 main.go:141] libmachine: (functional-084612) Calling .Close
I0116 02:36:45.152284  345053 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:36:45.152338  345053 main.go:141] libmachine: Making call to close connection to plugin binary
I0116 02:36:45.152300  345053 main.go:141] libmachine: (functional-084612) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-084612 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| gcr.io/google-containers/addon-resizer      | functional-084612  | sha256:ffd4cf | 10.8MB |
| registry.k8s.io/kube-proxy                  | v1.28.4            | sha256:83f6cc | 24.6MB |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| docker.io/library/minikube-local-cache-test | functional-084612  | sha256:aa68f3 | 1kB    |
| docker.io/library/mysql                     | 5.7                | sha256:510733 | 138MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:ead0a4 | 16.2MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4            | sha256:d058aa | 33.4MB |
| registry.k8s.io/kube-scheduler              | v1.28.4            | sha256:e3db31 | 18.8MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:73deb9 | 103MB  |
| registry.k8s.io/kube-apiserver              | v1.28.4            | sha256:7fe0e6 | 34.7MB |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:c7d129 | 27.7MB |
| docker.io/library/nginx                     | latest             | sha256:a87587 | 70.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| localhost/my-image                          | functional-084612  | sha256:aa53c5 | 775kB  |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-084612 image ls --format table --alsologtostderr:
I0116 02:36:50.114566  345411 out.go:296] Setting OutFile to fd 1 ...
I0116 02:36:50.114713  345411 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:36:50.114724  345411 out.go:309] Setting ErrFile to fd 2...
I0116 02:36:50.114728  345411 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:36:50.114933  345411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-330687/.minikube/bin
I0116 02:36:50.115591  345411 config.go:182] Loaded profile config "functional-084612": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0116 02:36:50.115701  345411 config.go:182] Loaded profile config "functional-084612": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0116 02:36:50.116096  345411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0116 02:36:50.116157  345411 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:36:50.132543  345411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43767
I0116 02:36:50.133118  345411 main.go:141] libmachine: () Calling .GetVersion
I0116 02:36:50.133833  345411 main.go:141] libmachine: Using API Version  1
I0116 02:36:50.133871  345411 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:36:50.134264  345411 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:36:50.134542  345411 main.go:141] libmachine: (functional-084612) Calling .GetState
I0116 02:36:50.136565  345411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0116 02:36:50.136614  345411 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:36:50.151987  345411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43161
I0116 02:36:50.152541  345411 main.go:141] libmachine: () Calling .GetVersion
I0116 02:36:50.153101  345411 main.go:141] libmachine: Using API Version  1
I0116 02:36:50.153139  345411 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:36:50.153448  345411 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:36:50.153650  345411 main.go:141] libmachine: (functional-084612) Calling .DriverName
I0116 02:36:50.153890  345411 ssh_runner.go:195] Run: systemctl --version
I0116 02:36:50.153923  345411 main.go:141] libmachine: (functional-084612) Calling .GetSSHHostname
I0116 02:36:50.157136  345411 main.go:141] libmachine: (functional-084612) DBG | domain functional-084612 has defined MAC address 52:54:00:28:65:b1 in network mk-functional-084612
I0116 02:36:50.157707  345411 main.go:141] libmachine: (functional-084612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:65:b1", ip: ""} in network mk-functional-084612: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:20 +0000 UTC Type:0 Mac:52:54:00:28:65:b1 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:functional-084612 Clientid:01:52:54:00:28:65:b1}
I0116 02:36:50.157755  345411 main.go:141] libmachine: (functional-084612) DBG | domain functional-084612 has defined IP address 192.168.39.87 and MAC address 52:54:00:28:65:b1 in network mk-functional-084612
I0116 02:36:50.158009  345411 main.go:141] libmachine: (functional-084612) Calling .GetSSHPort
I0116 02:36:50.158270  345411 main.go:141] libmachine: (functional-084612) Calling .GetSSHKeyPath
I0116 02:36:50.158477  345411 main.go:141] libmachine: (functional-084612) Calling .GetSSHUsername
I0116 02:36:50.158687  345411 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-330687/.minikube/machines/functional-084612/id_rsa Username:docker}
I0116 02:36:50.246583  345411 ssh_runner.go:195] Run: sudo crictl images --output json
I0116 02:36:50.301718  345411 main.go:141] libmachine: Making call to close driver server
I0116 02:36:50.301734  345411 main.go:141] libmachine: (functional-084612) Calling .Close
I0116 02:36:50.302020  345411 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:36:50.302044  345411 main.go:141] libmachine: Making call to close connection to plugin binary
I0116 02:36:50.302054  345411 main.go:141] libmachine: Making call to close driver server
I0116 02:36:50.302063  345411 main.go:141] libmachine: (functional-084612) Calling .Close
I0116 02:36:50.302285  345411 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:36:50.302300  345411 main.go:141] libmachine: Making call to close connection to plugin binary
I0116 02:36:50.302322  345411 main.go:141] libmachine: (functional-084612) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-084612 image ls --format json --alsologtostderr:
[{"id":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"33420443"},{"id":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"18834488"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:aa68f33e84532cbdbaf95905fd1ceff13e2c339e7efa10c432c72efd28d8c8e1","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-084612"],"size":"1005"
},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6","repoDigests":["docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac"],"repoTags":["docker.io/library/nginx:latest"],"size":"70520324"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"
,"repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"27737299"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-084612"],"size":"10823156"},{"id":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["regist
ry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"16190758"},{"id":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"102894559"},{"id":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"34683820"},{"id":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"24581402"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e
4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:aa53c5291323db339d32fffac43efe85508d4a01b14f5bc338733b13cbccc521","repoDigests":[],"repoTags":["localhost/my-image:functional-084612"],"size":"774900"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-084612 image ls --format json --alsologtostderr:
I0116 02:36:50.380060  345453 out.go:296] Setting OutFile to fd 1 ...
I0116 02:36:50.380271  345453 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:36:50.380285  345453 out.go:309] Setting ErrFile to fd 2...
I0116 02:36:50.380293  345453 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:36:50.380524  345453 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-330687/.minikube/bin
I0116 02:36:50.381213  345453 config.go:182] Loaded profile config "functional-084612": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0116 02:36:50.381322  345453 config.go:182] Loaded profile config "functional-084612": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0116 02:36:50.381699  345453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0116 02:36:50.381763  345453 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:36:50.397611  345453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36007
I0116 02:36:50.398175  345453 main.go:141] libmachine: () Calling .GetVersion
I0116 02:36:50.398901  345453 main.go:141] libmachine: Using API Version  1
I0116 02:36:50.398938  345453 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:36:50.399331  345453 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:36:50.399560  345453 main.go:141] libmachine: (functional-084612) Calling .GetState
I0116 02:36:50.401414  345453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0116 02:36:50.401495  345453 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:36:50.416972  345453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38663
I0116 02:36:50.417469  345453 main.go:141] libmachine: () Calling .GetVersion
I0116 02:36:50.417974  345453 main.go:141] libmachine: Using API Version  1
I0116 02:36:50.418004  345453 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:36:50.418408  345453 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:36:50.418634  345453 main.go:141] libmachine: (functional-084612) Calling .DriverName
I0116 02:36:50.418882  345453 ssh_runner.go:195] Run: systemctl --version
I0116 02:36:50.418928  345453 main.go:141] libmachine: (functional-084612) Calling .GetSSHHostname
I0116 02:36:50.422267  345453 main.go:141] libmachine: (functional-084612) DBG | domain functional-084612 has defined MAC address 52:54:00:28:65:b1 in network mk-functional-084612
I0116 02:36:50.422742  345453 main.go:141] libmachine: (functional-084612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:65:b1", ip: ""} in network mk-functional-084612: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:20 +0000 UTC Type:0 Mac:52:54:00:28:65:b1 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:functional-084612 Clientid:01:52:54:00:28:65:b1}
I0116 02:36:50.422777  345453 main.go:141] libmachine: (functional-084612) DBG | domain functional-084612 has defined IP address 192.168.39.87 and MAC address 52:54:00:28:65:b1 in network mk-functional-084612
I0116 02:36:50.422922  345453 main.go:141] libmachine: (functional-084612) Calling .GetSSHPort
I0116 02:36:50.423138  345453 main.go:141] libmachine: (functional-084612) Calling .GetSSHKeyPath
I0116 02:36:50.423324  345453 main.go:141] libmachine: (functional-084612) Calling .GetSSHUsername
I0116 02:36:50.423479  345453 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-330687/.minikube/machines/functional-084612/id_rsa Username:docker}
I0116 02:36:50.525622  345453 ssh_runner.go:195] Run: sudo crictl images --output json
I0116 02:36:50.589678  345453 main.go:141] libmachine: Making call to close driver server
I0116 02:36:50.589711  345453 main.go:141] libmachine: (functional-084612) Calling .Close
I0116 02:36:50.589999  345453 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:36:50.590017  345453 main.go:141] libmachine: (functional-084612) DBG | Closing plugin on server side
I0116 02:36:50.590025  345453 main.go:141] libmachine: Making call to close connection to plugin binary
I0116 02:36:50.590044  345453 main.go:141] libmachine: Making call to close driver server
I0116 02:36:50.590055  345453 main.go:141] libmachine: (functional-084612) Calling .Close
I0116 02:36:50.590264  345453 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:36:50.590277  345453 main.go:141] libmachine: (functional-084612) DBG | Closing plugin on server side
I0116 02:36:50.590280  345453 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-084612 image ls --format yaml --alsologtostderr:
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "27737299"
- id: sha256:a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6
repoDigests:
- docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac
repoTags:
- docker.io/library/nginx:latest
size: "70520324"
- id: sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "16190758"
- id: sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "18834488"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "102894559"
- id: sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "34683820"
- id: sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "24581402"
- id: sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "33420443"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:aa68f33e84532cbdbaf95905fd1ceff13e2c339e7efa10c432c72efd28d8c8e1
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-084612
size: "1005"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-084612
size: "10823156"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-084612 image ls --format yaml --alsologtostderr:
I0116 02:36:45.235082  345087 out.go:296] Setting OutFile to fd 1 ...
I0116 02:36:45.235391  345087 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:36:45.235402  345087 out.go:309] Setting ErrFile to fd 2...
I0116 02:36:45.235407  345087 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:36:45.235671  345087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-330687/.minikube/bin
I0116 02:36:45.236441  345087 config.go:182] Loaded profile config "functional-084612": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0116 02:36:45.236549  345087 config.go:182] Loaded profile config "functional-084612": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0116 02:36:45.236958  345087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0116 02:36:45.237019  345087 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:36:45.252484  345087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40417
I0116 02:36:45.253003  345087 main.go:141] libmachine: () Calling .GetVersion
I0116 02:36:45.253675  345087 main.go:141] libmachine: Using API Version  1
I0116 02:36:45.253720  345087 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:36:45.254161  345087 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:36:45.254407  345087 main.go:141] libmachine: (functional-084612) Calling .GetState
I0116 02:36:45.256585  345087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0116 02:36:45.256653  345087 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:36:45.272177  345087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38941
I0116 02:36:45.272728  345087 main.go:141] libmachine: () Calling .GetVersion
I0116 02:36:45.273216  345087 main.go:141] libmachine: Using API Version  1
I0116 02:36:45.273242  345087 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:36:45.273627  345087 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:36:45.273833  345087 main.go:141] libmachine: (functional-084612) Calling .DriverName
I0116 02:36:45.274119  345087 ssh_runner.go:195] Run: systemctl --version
I0116 02:36:45.274157  345087 main.go:141] libmachine: (functional-084612) Calling .GetSSHHostname
I0116 02:36:45.277639  345087 main.go:141] libmachine: (functional-084612) DBG | domain functional-084612 has defined MAC address 52:54:00:28:65:b1 in network mk-functional-084612
I0116 02:36:45.278128  345087 main.go:141] libmachine: (functional-084612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:65:b1", ip: ""} in network mk-functional-084612: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:20 +0000 UTC Type:0 Mac:52:54:00:28:65:b1 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:functional-084612 Clientid:01:52:54:00:28:65:b1}
I0116 02:36:45.278157  345087 main.go:141] libmachine: (functional-084612) DBG | domain functional-084612 has defined IP address 192.168.39.87 and MAC address 52:54:00:28:65:b1 in network mk-functional-084612
I0116 02:36:45.278396  345087 main.go:141] libmachine: (functional-084612) Calling .GetSSHPort
I0116 02:36:45.278640  345087 main.go:141] libmachine: (functional-084612) Calling .GetSSHKeyPath
I0116 02:36:45.278810  345087 main.go:141] libmachine: (functional-084612) Calling .GetSSHUsername
I0116 02:36:45.278990  345087 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-330687/.minikube/machines/functional-084612/id_rsa Username:docker}
I0116 02:36:45.374226  345087 ssh_runner.go:195] Run: sudo crictl images --output json
I0116 02:36:45.472875  345087 main.go:141] libmachine: Making call to close driver server
I0116 02:36:45.472896  345087 main.go:141] libmachine: (functional-084612) Calling .Close
I0116 02:36:45.473235  345087 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:36:45.473252  345087 main.go:141] libmachine: Making call to close connection to plugin binary
I0116 02:36:45.473261  345087 main.go:141] libmachine: Making call to close driver server
I0116 02:36:45.473269  345087 main.go:141] libmachine: (functional-084612) Calling .Close
I0116 02:36:45.473606  345087 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:36:45.473638  345087 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.57s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-084612 ssh pgrep buildkitd: exit status 1 (245.887527ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 image build -t localhost/my-image:functional-084612 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-084612 image build -t localhost/my-image:functional-084612 testdata/build --alsologtostderr: (4.05038399s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-084612 image build -t localhost/my-image:functional-084612 testdata/build --alsologtostderr:
I0116 02:36:45.792152  345139 out.go:296] Setting OutFile to fd 1 ...
I0116 02:36:45.792306  345139 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:36:45.792316  345139 out.go:309] Setting ErrFile to fd 2...
I0116 02:36:45.792321  345139 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:36:45.792532  345139 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-330687/.minikube/bin
I0116 02:36:45.793167  345139 config.go:182] Loaded profile config "functional-084612": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0116 02:36:45.793810  345139 config.go:182] Loaded profile config "functional-084612": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0116 02:36:45.794237  345139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0116 02:36:45.794317  345139 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:36:45.809877  345139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38625
I0116 02:36:45.810403  345139 main.go:141] libmachine: () Calling .GetVersion
I0116 02:36:45.811019  345139 main.go:141] libmachine: Using API Version  1
I0116 02:36:45.811043  345139 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:36:45.811470  345139 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:36:45.811684  345139 main.go:141] libmachine: (functional-084612) Calling .GetState
I0116 02:36:45.813559  345139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0116 02:36:45.813608  345139 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:36:45.829070  345139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33963
I0116 02:36:45.829568  345139 main.go:141] libmachine: () Calling .GetVersion
I0116 02:36:45.830079  345139 main.go:141] libmachine: Using API Version  1
I0116 02:36:45.830100  345139 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:36:45.830515  345139 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:36:45.830741  345139 main.go:141] libmachine: (functional-084612) Calling .DriverName
I0116 02:36:45.831017  345139 ssh_runner.go:195] Run: systemctl --version
I0116 02:36:45.831046  345139 main.go:141] libmachine: (functional-084612) Calling .GetSSHHostname
I0116 02:36:45.834024  345139 main.go:141] libmachine: (functional-084612) DBG | domain functional-084612 has defined MAC address 52:54:00:28:65:b1 in network mk-functional-084612
I0116 02:36:45.834512  345139 main.go:141] libmachine: (functional-084612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:65:b1", ip: ""} in network mk-functional-084612: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:20 +0000 UTC Type:0 Mac:52:54:00:28:65:b1 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:functional-084612 Clientid:01:52:54:00:28:65:b1}
I0116 02:36:45.834546  345139 main.go:141] libmachine: (functional-084612) DBG | domain functional-084612 has defined IP address 192.168.39.87 and MAC address 52:54:00:28:65:b1 in network mk-functional-084612
I0116 02:36:45.834770  345139 main.go:141] libmachine: (functional-084612) Calling .GetSSHPort
I0116 02:36:45.834975  345139 main.go:141] libmachine: (functional-084612) Calling .GetSSHKeyPath
I0116 02:36:45.835155  345139 main.go:141] libmachine: (functional-084612) Calling .GetSSHUsername
I0116 02:36:45.835330  345139 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-330687/.minikube/machines/functional-084612/id_rsa Username:docker}
I0116 02:36:45.937082  345139 build_images.go:151] Building image from path: /tmp/build.3081157885.tar
I0116 02:36:45.937172  345139 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0116 02:36:45.959915  345139 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3081157885.tar
I0116 02:36:45.968317  345139 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3081157885.tar: stat -c "%s %y" /var/lib/minikube/build/build.3081157885.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3081157885.tar': No such file or directory
I0116 02:36:45.968358  345139 ssh_runner.go:362] scp /tmp/build.3081157885.tar --> /var/lib/minikube/build/build.3081157885.tar (3072 bytes)
I0116 02:36:46.011897  345139 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3081157885
I0116 02:36:46.024521  345139 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3081157885 -xf /var/lib/minikube/build/build.3081157885.tar
I0116 02:36:46.044057  345139 containerd.go:379] Building image: /var/lib/minikube/build/build.3081157885
I0116 02:36:46.044142  345139 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3081157885 --local dockerfile=/var/lib/minikube/build/build.3081157885 --output type=image,name=localhost/my-image:functional-084612
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.8s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context:
#3 transferring context: 2B done
#3 DONE 0.1s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 DONE 0.2s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.1s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 1.1s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:3c55e055c731be4ba2574fe4e149f3311206dd49be90c253ac731c9dafbc64a0 0.0s done
#8 exporting config sha256:aa53c5291323db339d32fffac43efe85508d4a01b14f5bc338733b13cbccc521 0.0s done
#8 naming to localhost/my-image:functional-084612
#8 naming to localhost/my-image:functional-084612 done
#8 DONE 0.3s
I0116 02:36:49.732705  345139 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3081157885 --local dockerfile=/var/lib/minikube/build/build.3081157885 --output type=image,name=localhost/my-image:functional-084612: (3.688532432s)
I0116 02:36:49.732772  345139 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3081157885
I0116 02:36:49.750476  345139 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3081157885.tar
I0116 02:36:49.766902  345139 build_images.go:207] Built localhost/my-image:functional-084612 from /tmp/build.3081157885.tar
I0116 02:36:49.766948  345139 build_images.go:123] succeeded building to: functional-084612
I0116 02:36:49.766955  345139 build_images.go:124] failed building to: 
I0116 02:36:49.766998  345139 main.go:141] libmachine: Making call to close driver server
I0116 02:36:49.767019  345139 main.go:141] libmachine: (functional-084612) Calling .Close
I0116 02:36:49.767375  345139 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:36:49.767410  345139 main.go:141] libmachine: Making call to close connection to plugin binary
I0116 02:36:49.767422  345139 main.go:141] libmachine: Making call to close driver server
I0116 02:36:49.767433  345139 main.go:141] libmachine: (functional-084612) Calling .Close
I0116 02:36:49.769240  345139 main.go:141] libmachine: (functional-084612) DBG | Closing plugin on server side
I0116 02:36:49.769331  345139 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:36:49.769369  345139 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.57s)
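For reference, the build exercised above can be reproduced outside the test harness with the same CLI invocation. The following is a minimal Go sketch (not the test's own code) that shells out to the minikube binary and profile used in this run; on the guest it drives the same three-step build shown in the buildkit trace above (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same invocation the test ran: build testdata/build into a local image
	// on the functional-084612 profile (adjust binary path and profile locally).
	cmd := exec.Command("out/minikube-linux-amd64",
		"-p", "functional-084612",
		"image", "build",
		"-t", "localhost/my-image:functional-084612",
		"testdata/build",
		"--alsologtostderr",
	)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		log.Fatalf("image build failed: %v", err)
	}
}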

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.08s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.05337746s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-084612
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.08s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.93s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 image load --daemon gcr.io/google-containers/addon-resizer:functional-084612 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-084612 image load --daemon gcr.io/google-containers/addon-resizer:functional-084612 --alsologtostderr: (4.672491747s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.93s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (23.34s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-084612 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-084612 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-8475l" [283c3198-d1bb-48c6-aa3f-0266a86f4534] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-8475l" [283c3198-d1bb-48c6-aa3f-0266a86f4534] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 23.011255818s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (23.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.94s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 image load --daemon gcr.io/google-containers/addon-resizer:functional-084612 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-084612 image load --daemon gcr.io/google-containers/addon-resizer:functional-084612 --alsologtostderr: (2.663468016s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.94s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.73s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-084612
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 image load --daemon gcr.io/google-containers/addon-resizer:functional-084612 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-084612 image load --daemon gcr.io/google-containers/addon-resizer:functional-084612 --alsologtostderr: (5.43988193s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.73s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.42s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 image save gcr.io/google-containers/addon-resizer:functional-084612 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-084612 image save gcr.io/google-containers/addon-resizer:functional-084612 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.423995781s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.42s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.69s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 image rm gcr.io/google-containers/addon-resizer:functional-084612 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.69s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.87s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-084612 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (2.618450355s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.87s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.44s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-084612
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 image save --daemon gcr.io/google-containers/addon-resizer:functional-084612 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-084612 image save --daemon gcr.io/google-containers/addon-resizer:functional-084612 --alsologtostderr: (1.400251035s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-084612
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.44s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.48s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.48s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 service list -o json
functional_test.go:1493: Took "463.432507ms" to run "out/minikube-linux-amd64 -p functional-084612 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.39.87:30214
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.31s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.34s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.87:30214
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)
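The endpoint found above (http://192.168.39.87:30214) is the NodePort exposed for the hello-node echoserver deployment created in ServiceCmd/DeployApp. A minimal Go sketch of the same reachability check, assuming that address from this run (substitute the URL printed by `minikube service hello-node --url` for a local cluster):

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// NodePort endpoint printed by `minikube service hello-node --url` in this run.
	resp, err := http.Get("http://192.168.39.87:30214")
	if err != nil {
		log.Fatalf("request failed: %v", err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatalf("reading response failed: %v", err)
	}
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}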

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.3s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "233.41331ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "64.454217ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.30s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "223.11459ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "62.146179ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.02s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-084612 /tmp/TestFunctionalparallelMountCmdany-port1838561505/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1705372600537852041" to /tmp/TestFunctionalparallelMountCmdany-port1838561505/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1705372600537852041" to /tmp/TestFunctionalparallelMountCmdany-port1838561505/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1705372600537852041" to /tmp/TestFunctionalparallelMountCmdany-port1838561505/001/test-1705372600537852041
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-084612 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (277.241947ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 16 02:36 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 16 02:36 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 16 02:36 test-1705372600537852041
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 ssh cat /mount-9p/test-1705372600537852041
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-084612 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [40d51cd6-3037-43c0-aa7f-f225a215fa23] Pending
helpers_test.go:344: "busybox-mount" [40d51cd6-3037-43c0-aa7f-f225a215fa23] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [40d51cd6-3037-43c0-aa7f-f225a215fa23] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [40d51cd6-3037-43c0-aa7f-f225a215fa23] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003959275s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-084612 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-084612 /tmp/TestFunctionalparallelMountCmdany-port1838561505/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.02s)
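The flow above mounts a host temp directory into the guest at /mount-9p over 9p, then verifies the host-written files from inside the VM. A minimal Go sketch of that host-side sequence, assuming the binary path and profile name from this run (the test itself uses its daemon helpers and polls findmnt rather than sleeping):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"time"
)

func main() {
	// Host-side setup: a temp directory with one file in it.
	dir, err := os.MkdirTemp("", "mount-demo")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(dir)

	if err := os.WriteFile(filepath.Join(dir, "created-by-demo"), []byte("hello\n"), 0o644); err != nil {
		log.Fatal(err)
	}

	// Start the 9p mount in the background, like the test's daemon helper does.
	mount := exec.Command("out/minikube-linux-amd64", "mount",
		"-p", "functional-084612", dir+":/mount-9p", "--alsologtostderr", "-v=1")
	if err := mount.Start(); err != nil {
		log.Fatal(err)
	}
	defer mount.Process.Kill()

	// Crude wait; the test polls `findmnt -T /mount-9p` instead.
	time.Sleep(5 * time.Second)

	// Verify the file is visible inside the guest.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-084612",
		"ssh", "--", "ls", "-la", "/mount-9p").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		log.Fatalf("ls over ssh failed: %v", err)
	}
}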

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-084612 /tmp/TestFunctionalparallelMountCmdspecific-port2280755572/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-084612 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (259.560257ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-084612 /tmp/TestFunctionalparallelMountCmdspecific-port2280755572/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-084612 ssh "sudo umount -f /mount-9p": exit status 1 (236.326733ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-084612 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-084612 /tmp/TestFunctionalparallelMountCmdspecific-port2280755572/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.00s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.75s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-084612 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1767504591/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-084612 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1767504591/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-084612 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1767504591/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-084612 ssh "findmnt -T" /mount1: exit status 1 (308.372733ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-084612 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-084612 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-084612 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1767504591/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-084612 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1767504591/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-084612 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1767504591/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
2024/01/16 02:36:54 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.75s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.08s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-084612
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-084612
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-084612
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (78.77s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-699477 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0116 02:37:48.519676  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-699477 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m18.768753768s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (78.77s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.19s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-699477 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-699477 addons enable ingress --alsologtostderr -v=5: (11.185684067s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.19s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.64s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-699477 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.64s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (42.84s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-699477 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-699477 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (17.515077521s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-699477 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-699477 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [7d8e3607-38b9-4c61-af45-1f4b92a80bcb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [7d8e3607-38b9-4c61-af45-1f4b92a80bcb] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.004650373s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-699477 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-699477 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-699477 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.223
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-699477 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-699477 addons disable ingress-dns --alsologtostderr -v=1: (7.357319789s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-699477 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-699477 addons disable ingress --alsologtostderr -v=1: (7.638737487s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (42.84s)
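The ingress check above works by curling the node's loopback address while presenting the virtual host nginx.example.com, so the ingress controller routes the request to the nginx backend. A minimal Go sketch of the same Host-header trick; the test runs its curl inside the VM via `minikube ssh`, so this assumes it is executed somewhere that 127.0.0.1:80 reaches the controller:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of curl -H 'Host: nginx.example.com': route by virtual host.
	req.Host = "nginx.example.com"

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatalf("request failed: %v", err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}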

                                                
                                    
TestJSONOutput/start/Command (64.86s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-949419 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E0116 02:40:04.673535  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-949419 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m4.864219988s)
--- PASS: TestJSONOutput/start/Command (64.86s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.69s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-949419 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.67s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-949419 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.11s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-949419 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-949419 --output=json --user=testUser: (7.11460809s)
--- PASS: TestJSONOutput/stop/Command (7.11s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-904817 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-904817 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (87.330742ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d5631c1e-ab7d-4619-a58d-603cb5b03d30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-904817] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a68e40c0-7020-4912-9bb3-858bf581ea8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17965"}}
	{"specversion":"1.0","id":"725c281d-cbcf-4f9d-ad61-c79fa6af5791","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"95ff8f5a-d8b1-45a7-87f2-00b3d352bab4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17965-330687/kubeconfig"}}
	{"specversion":"1.0","id":"2545651f-16ec-4498-88f8-f0f8fd97619d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-330687/.minikube"}}
	{"specversion":"1.0","id":"1fd1ee51-2171-4912-9d04-11f87fb33872","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"4995e3be-0868-43cf-9919-528ea0c34929","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fa53f07b-c7e2-4b29-a349-2c562c8649c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-904817" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-904817
--- PASS: TestErrorJSONOutput (0.24s)
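The --output=json stream above is newline-delimited JSON with CloudEvents-style fields (specversion, id, source, type, datacontenttype, data). A minimal Go sketch that decodes such a stream from stdin and surfaces step and error events; the struct below is inferred from this run's output rather than taken from minikube's own types:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// minikubeEvent mirrors the event shape shown in the stdout above.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that is not a JSON event line
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error (exit %s): %s\n", ev.Data["exitcode"], ev.Data["message"])
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}

Piping `out/minikube-linux-amd64 start -p <profile> --output=json ...` through this program would, for the failing run above, print the DRV_UNSUPPORTED_OS message with exit code 56.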

                                                
                                    
TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (102.82s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-447541 --driver=kvm2  --container-runtime=containerd
E0116 02:40:32.361313  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.crt: no such file or directory
E0116 02:41:12.689313  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/functional-084612/client.crt: no such file or directory
E0116 02:41:12.694679  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/functional-084612/client.crt: no such file or directory
E0116 02:41:12.704986  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/functional-084612/client.crt: no such file or directory
E0116 02:41:12.725361  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/functional-084612/client.crt: no such file or directory
E0116 02:41:12.765824  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/functional-084612/client.crt: no such file or directory
E0116 02:41:12.846301  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/functional-084612/client.crt: no such file or directory
E0116 02:41:13.006779  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/functional-084612/client.crt: no such file or directory
E0116 02:41:13.327424  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/functional-084612/client.crt: no such file or directory
E0116 02:41:13.968447  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/functional-084612/client.crt: no such file or directory
E0116 02:41:15.248641  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/functional-084612/client.crt: no such file or directory
E0116 02:41:17.810472  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/functional-084612/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-447541 --driver=kvm2  --container-runtime=containerd: (49.746724811s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-452418 --driver=kvm2  --container-runtime=containerd
E0116 02:41:22.931658  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/functional-084612/client.crt: no such file or directory
E0116 02:41:33.172302  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/functional-084612/client.crt: no such file or directory
E0116 02:41:53.652757  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/functional-084612/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-452418 --driver=kvm2  --container-runtime=containerd: (50.040558437s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-447541
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-452418
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-452418" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-452418
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-452418: (1.036201372s)
helpers_test.go:175: Cleaning up "first-447541" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-447541
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-447541: (1.033686137s)
--- PASS: TestMinikubeProfile (102.82s)

TestMountStart/serial/StartWithMountFirst (29.62s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-888211 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0116 02:42:34.614598  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/functional-084612/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-888211 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (28.619287797s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.62s)

TestMountStart/serial/VerifyMountFirst (0.43s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-888211 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-888211 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.43s)

TestMountStart/serial/StartWithMountSecond (29.59s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-904233 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-904233 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (28.589709185s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.59s)

TestMountStart/serial/VerifyMountSecond (0.42s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-904233 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-904233 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.42s)

TestMountStart/serial/DeleteFirst (0.91s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-888211 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.91s)

TestMountStart/serial/VerifyMountPostDelete (0.43s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-904233 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-904233 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.43s)

TestMountStart/serial/Stop (1.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-904233
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-904233: (1.215523374s)
--- PASS: TestMountStart/serial/Stop (1.22s)

TestMountStart/serial/RestartStopped (22.59s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-904233
E0116 02:43:32.389735  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/ingress-addon-legacy-699477/client.crt: no such file or directory
E0116 02:43:32.395006  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/ingress-addon-legacy-699477/client.crt: no such file or directory
E0116 02:43:32.405330  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/ingress-addon-legacy-699477/client.crt: no such file or directory
E0116 02:43:32.425675  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/ingress-addon-legacy-699477/client.crt: no such file or directory
E0116 02:43:32.466012  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/ingress-addon-legacy-699477/client.crt: no such file or directory
E0116 02:43:32.546352  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/ingress-addon-legacy-699477/client.crt: no such file or directory
E0116 02:43:32.706930  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/ingress-addon-legacy-699477/client.crt: no such file or directory
E0116 02:43:33.027528  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/ingress-addon-legacy-699477/client.crt: no such file or directory
E0116 02:43:33.668509  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/ingress-addon-legacy-699477/client.crt: no such file or directory
E0116 02:43:34.949140  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/ingress-addon-legacy-699477/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-904233: (21.591989067s)
E0116 02:43:37.510202  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/ingress-addon-legacy-699477/client.crt: no such file or directory
--- PASS: TestMountStart/serial/RestartStopped (22.59s)

TestMountStart/serial/VerifyMountPostStop (0.44s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-904233 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-904233 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.44s)
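Every VerifyMount* step above boils down to the same check: `minikube ssh -- mount` on the profile must list a 9p filesystem. A rough Go sketch of that check, with the binary path and profile name simply mirrored from this log:

	// mountcheck_sketch.go: rough equivalent of the VerifyMount* steps above,
	// i.e. ssh into the node and confirm a 9p filesystem is mounted.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		profile := "mount-start-2-904233" // profile name mirrored from the log; adjust as needed
		out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", "--", "mount").CombinedOutput()
		if err != nil {
			log.Fatalf("ssh failed: %v\n%s", err, out)
		}
		for _, line := range strings.Split(string(out), "\n") {
			if strings.Contains(line, "9p") {
				fmt.Println("9p mount present:", strings.TrimSpace(line))
				return
			}
		}
		log.Fatal("no 9p mount found")
	}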

TestMultiNode/serial/FreshStart2Nodes (115.36s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-513593 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0116 02:43:42.631334  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/ingress-addon-legacy-699477/client.crt: no such file or directory
E0116 02:43:52.872503  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/ingress-addon-legacy-699477/client.crt: no such file or directory
E0116 02:43:56.535912  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/functional-084612/client.crt: no such file or directory
E0116 02:44:13.352961  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/ingress-addon-legacy-699477/client.crt: no such file or directory
E0116 02:44:54.313368  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/ingress-addon-legacy-699477/client.crt: no such file or directory
E0116 02:45:04.674158  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-513593 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m54.907195601s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (115.36s)

TestMultiNode/serial/DeployApp2Nodes (6.45s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-513593 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-513593 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-513593 -- rollout status deployment/busybox: (4.454974956s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-513593 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-513593 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-513593 -- exec busybox-5bc68d56bd-cwd4k -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-513593 -- exec busybox-5bc68d56bd-nwq7k -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-513593 -- exec busybox-5bc68d56bd-cwd4k -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-513593 -- exec busybox-5bc68d56bd-nwq7k -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-513593 -- exec busybox-5bc68d56bd-cwd4k -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-513593 -- exec busybox-5bc68d56bd-nwq7k -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.45s)
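The DNS checks above all follow one pattern: list the busybox pod names with a jsonpath query, then run nslookup inside each pod for kubernetes.io, kubernetes.default and kubernetes.default.svc.cluster.local, so resolution is verified from every node. A sketch of that loop, assuming a plain kubectl on PATH (the test itself goes through the `minikube kubectl --` wrapper):

	// dnscheck_sketch.go: the nslookup loop above, using a plain kubectl
	// (the test goes through `minikube kubectl --`); illustrative only.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// In the test only the busybox pods live in the default namespace, so no selector is needed.
		out, err := exec.Command("kubectl", "get", "pods",
			"-o", "jsonpath={.items[*].metadata.name}").Output()
		if err != nil {
			log.Fatalf("listing pods: %v", err)
		}
		hosts := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
		for _, pod := range strings.Fields(string(out)) {
			for _, host := range hosts {
				// each lookup must succeed from every pod, i.e. from every node
				if res, err := exec.Command("kubectl", "exec", pod, "--", "nslookup", host).CombinedOutput(); err != nil {
					log.Fatalf("%s: nslookup %s failed: %v\n%s", pod, host, err, res)
				}
				fmt.Printf("%s resolves %s\n", pod, host)
			}
		}
	}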

TestMultiNode/serial/PingHostFrom2Pods (0.96s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-513593 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-513593 -- exec busybox-5bc68d56bd-cwd4k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-513593 -- exec busybox-5bc68d56bd-cwd4k -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-513593 -- exec busybox-5bc68d56bd-nwq7k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-513593 -- exec busybox-5bc68d56bd-nwq7k -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.96s)
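The shell pipeline above takes the fifth line of the busybox nslookup output and its third whitespace-separated field, which for this busybox image is the address resolved for host.minikube.internal, then pings that address from inside the pod. The same extraction in Go, with the pod name copied from this run and kubectl assumed on PATH:

	// hostping_sketch.go: the awk/cut pipeline above in Go, taking line 5, field 3
	// of the busybox nslookup output (the resolved host address) and pinging it.
	package main

	import (
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		pod := "busybox-5bc68d56bd-cwd4k" // pod name copied from this run; any busybox pod works
		out, err := exec.Command("kubectl", "exec", pod, "--", "nslookup", "host.minikube.internal").Output()
		if err != nil {
			log.Fatalf("nslookup: %v", err)
		}
		lines := strings.Split(string(out), "\n")
		if len(lines) < 5 {
			log.Fatalf("unexpected nslookup output:\n%s", out)
		}
		fields := strings.Fields(lines[4]) // NR==5 in the awk pipeline
		if len(fields) < 3 {
			log.Fatalf("unexpected address line: %q", lines[4])
		}
		hostIP := fields[2] // roughly cut -d' ' -f3
		if res, err := exec.Command("kubectl", "exec", pod, "--", "ping", "-c", "1", hostIP).CombinedOutput(); err != nil {
			log.Fatalf("ping %s: %v\n%s", hostIP, err, res)
		}
		log.Printf("pod %s can reach host %s", pod, hostIP)
	}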

TestMultiNode/serial/AddNode (41.42s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-513593 -v 3 --alsologtostderr
E0116 02:46:12.689747  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/functional-084612/client.crt: no such file or directory
E0116 02:46:16.234076  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/ingress-addon-legacy-699477/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-513593 -v 3 --alsologtostderr: (40.793251477s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.42s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-513593 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.23s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

TestMultiNode/serial/CopyFile (8.08s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 cp testdata/cp-test.txt multinode-513593:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 ssh -n multinode-513593 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 cp multinode-513593:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile508832495/001/cp-test_multinode-513593.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 ssh -n multinode-513593 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 cp multinode-513593:/home/docker/cp-test.txt multinode-513593-m02:/home/docker/cp-test_multinode-513593_multinode-513593-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 ssh -n multinode-513593 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 ssh -n multinode-513593-m02 "sudo cat /home/docker/cp-test_multinode-513593_multinode-513593-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 cp multinode-513593:/home/docker/cp-test.txt multinode-513593-m03:/home/docker/cp-test_multinode-513593_multinode-513593-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 ssh -n multinode-513593 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 ssh -n multinode-513593-m03 "sudo cat /home/docker/cp-test_multinode-513593_multinode-513593-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 cp testdata/cp-test.txt multinode-513593-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 ssh -n multinode-513593-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 cp multinode-513593-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile508832495/001/cp-test_multinode-513593-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 ssh -n multinode-513593-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 cp multinode-513593-m02:/home/docker/cp-test.txt multinode-513593:/home/docker/cp-test_multinode-513593-m02_multinode-513593.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 ssh -n multinode-513593-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 ssh -n multinode-513593 "sudo cat /home/docker/cp-test_multinode-513593-m02_multinode-513593.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 cp multinode-513593-m02:/home/docker/cp-test.txt multinode-513593-m03:/home/docker/cp-test_multinode-513593-m02_multinode-513593-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 ssh -n multinode-513593-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 ssh -n multinode-513593-m03 "sudo cat /home/docker/cp-test_multinode-513593-m02_multinode-513593-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 cp testdata/cp-test.txt multinode-513593-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 ssh -n multinode-513593-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 cp multinode-513593-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile508832495/001/cp-test_multinode-513593-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 ssh -n multinode-513593-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 cp multinode-513593-m03:/home/docker/cp-test.txt multinode-513593:/home/docker/cp-test_multinode-513593-m03_multinode-513593.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 ssh -n multinode-513593-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 ssh -n multinode-513593 "sudo cat /home/docker/cp-test_multinode-513593-m03_multinode-513593.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 cp multinode-513593-m03:/home/docker/cp-test.txt multinode-513593-m02:/home/docker/cp-test_multinode-513593-m03_multinode-513593-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 ssh -n multinode-513593-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 ssh -n multinode-513593-m02 "sudo cat /home/docker/cp-test_multinode-513593-m03_multinode-513593-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.08s)
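Each cp-test step above is a round trip: `minikube cp` a file onto a node (or from one node to another), then `minikube ssh -n <node> "sudo cat ..."` to confirm the content arrived intact. A reduced sketch of a single round trip, reusing the binary path, profile and node names from this log:

	// cptest_sketch.go: one round trip of the cp-test pattern above:
	// copy a local file to a node, read it back over ssh, compare.
	package main

	import (
		"bytes"
		"log"
		"os"
		"os/exec"
	)

	func main() {
		const (
			bin     = "out/minikube-linux-amd64" // binary path as used throughout this log
			profile = "multinode-513593"
			node    = "multinode-513593-m02"
			src     = "testdata/cp-test.txt"
			dst     = "/home/docker/cp-test.txt"
		)
		want, err := os.ReadFile(src)
		if err != nil {
			log.Fatal(err)
		}
		if out, err := exec.Command(bin, "-p", profile, "cp", src, node+":"+dst).CombinedOutput(); err != nil {
			log.Fatalf("cp: %v\n%s", err, out)
		}
		got, err := exec.Command(bin, "-p", profile, "ssh", "-n", node, "sudo cat "+dst).Output()
		if err != nil {
			log.Fatalf("ssh cat: %v", err)
		}
		if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
			log.Fatalf("content mismatch:\n got: %q\nwant: %q", got, want)
		}
		log.Print("cp round trip OK")
	}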

TestMultiNode/serial/StopNode (2.27s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-513593 node stop m03: (1.328585675s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-513593 status: exit status 7 (471.222142ms)
-- stdout --
	multinode-513593
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-513593-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-513593-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-513593 status --alsologtostderr: exit status 7 (472.406206ms)
-- stdout --
	multinode-513593
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-513593-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-513593-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0116 02:46:34.422105  352071 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:46:34.422350  352071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:46:34.422358  352071 out.go:309] Setting ErrFile to fd 2...
	I0116 02:46:34.422363  352071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:46:34.422605  352071 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-330687/.minikube/bin
	I0116 02:46:34.422808  352071 out.go:303] Setting JSON to false
	I0116 02:46:34.422850  352071 mustload.go:65] Loading cluster: multinode-513593
	I0116 02:46:34.422975  352071 notify.go:220] Checking for updates...
	I0116 02:46:34.423372  352071 config.go:182] Loaded profile config "multinode-513593": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0116 02:46:34.423396  352071 status.go:255] checking status of multinode-513593 ...
	I0116 02:46:34.423865  352071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:46:34.423934  352071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:46:34.443109  352071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46715
	I0116 02:46:34.443626  352071 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:46:34.444209  352071 main.go:141] libmachine: Using API Version  1
	I0116 02:46:34.444233  352071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:46:34.444638  352071 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:46:34.444864  352071 main.go:141] libmachine: (multinode-513593) Calling .GetState
	I0116 02:46:34.446572  352071 status.go:330] multinode-513593 host status = "Running" (err=<nil>)
	I0116 02:46:34.446590  352071 host.go:66] Checking if "multinode-513593" exists ...
	I0116 02:46:34.446965  352071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:46:34.447057  352071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:46:34.463538  352071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41723
	I0116 02:46:34.464010  352071 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:46:34.464480  352071 main.go:141] libmachine: Using API Version  1
	I0116 02:46:34.464501  352071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:46:34.464825  352071 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:46:34.465010  352071 main.go:141] libmachine: (multinode-513593) Calling .GetIP
	I0116 02:46:34.467618  352071 main.go:141] libmachine: (multinode-513593) DBG | domain multinode-513593 has defined MAC address 52:54:00:c0:01:a0 in network mk-multinode-513593
	I0116 02:46:34.468076  352071 main.go:141] libmachine: (multinode-513593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:01:a0", ip: ""} in network mk-multinode-513593: {Iface:virbr1 ExpiryTime:2024-01-16 03:43:56 +0000 UTC Type:0 Mac:52:54:00:c0:01:a0 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-513593 Clientid:01:52:54:00:c0:01:a0}
	I0116 02:46:34.468111  352071 main.go:141] libmachine: (multinode-513593) DBG | domain multinode-513593 has defined IP address 192.168.39.165 and MAC address 52:54:00:c0:01:a0 in network mk-multinode-513593
	I0116 02:46:34.468266  352071 host.go:66] Checking if "multinode-513593" exists ...
	I0116 02:46:34.468697  352071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:46:34.468753  352071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:46:34.485614  352071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37115
	I0116 02:46:34.486098  352071 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:46:34.486629  352071 main.go:141] libmachine: Using API Version  1
	I0116 02:46:34.486655  352071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:46:34.487017  352071 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:46:34.487234  352071 main.go:141] libmachine: (multinode-513593) Calling .DriverName
	I0116 02:46:34.487510  352071 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 02:46:34.487546  352071 main.go:141] libmachine: (multinode-513593) Calling .GetSSHHostname
	I0116 02:46:34.490239  352071 main.go:141] libmachine: (multinode-513593) DBG | domain multinode-513593 has defined MAC address 52:54:00:c0:01:a0 in network mk-multinode-513593
	I0116 02:46:34.490676  352071 main.go:141] libmachine: (multinode-513593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:01:a0", ip: ""} in network mk-multinode-513593: {Iface:virbr1 ExpiryTime:2024-01-16 03:43:56 +0000 UTC Type:0 Mac:52:54:00:c0:01:a0 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-513593 Clientid:01:52:54:00:c0:01:a0}
	I0116 02:46:34.490710  352071 main.go:141] libmachine: (multinode-513593) DBG | domain multinode-513593 has defined IP address 192.168.39.165 and MAC address 52:54:00:c0:01:a0 in network mk-multinode-513593
	I0116 02:46:34.490863  352071 main.go:141] libmachine: (multinode-513593) Calling .GetSSHPort
	I0116 02:46:34.491056  352071 main.go:141] libmachine: (multinode-513593) Calling .GetSSHKeyPath
	I0116 02:46:34.491180  352071 main.go:141] libmachine: (multinode-513593) Calling .GetSSHUsername
	I0116 02:46:34.491327  352071 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-330687/.minikube/machines/multinode-513593/id_rsa Username:docker}
	I0116 02:46:34.583374  352071 ssh_runner.go:195] Run: systemctl --version
	I0116 02:46:34.590241  352071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:46:34.605145  352071 kubeconfig.go:92] found "multinode-513593" server: "https://192.168.39.165:8443"
	I0116 02:46:34.605181  352071 api_server.go:166] Checking apiserver status ...
	I0116 02:46:34.605235  352071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 02:46:34.617464  352071 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1090/cgroup
	I0116 02:46:34.627283  352071 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod3020604f331f13cb1de80f3f3ed0a419/27990cd711b726596785de81a5abc2bb4b35e66888a27eb08098f831fc18e188"
	I0116 02:46:34.627368  352071 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod3020604f331f13cb1de80f3f3ed0a419/27990cd711b726596785de81a5abc2bb4b35e66888a27eb08098f831fc18e188/freezer.state
	I0116 02:46:34.639886  352071 api_server.go:204] freezer state: "THAWED"
	I0116 02:46:34.639925  352071 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I0116 02:46:34.645202  352071 api_server.go:279] https://192.168.39.165:8443/healthz returned 200:
	ok
	I0116 02:46:34.645228  352071 status.go:421] multinode-513593 apiserver status = Running (err=<nil>)
	I0116 02:46:34.645239  352071 status.go:257] multinode-513593 status: &{Name:multinode-513593 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0116 02:46:34.645256  352071 status.go:255] checking status of multinode-513593-m02 ...
	I0116 02:46:34.645560  352071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:46:34.645611  352071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:46:34.661178  352071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45411
	I0116 02:46:34.661725  352071 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:46:34.662254  352071 main.go:141] libmachine: Using API Version  1
	I0116 02:46:34.662270  352071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:46:34.662654  352071 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:46:34.662896  352071 main.go:141] libmachine: (multinode-513593-m02) Calling .GetState
	I0116 02:46:34.664385  352071 status.go:330] multinode-513593-m02 host status = "Running" (err=<nil>)
	I0116 02:46:34.664407  352071 host.go:66] Checking if "multinode-513593-m02" exists ...
	I0116 02:46:34.664712  352071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:46:34.664755  352071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:46:34.680899  352071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36627
	I0116 02:46:34.681439  352071 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:46:34.681960  352071 main.go:141] libmachine: Using API Version  1
	I0116 02:46:34.681981  352071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:46:34.682304  352071 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:46:34.682506  352071 main.go:141] libmachine: (multinode-513593-m02) Calling .GetIP
	I0116 02:46:34.685283  352071 main.go:141] libmachine: (multinode-513593-m02) DBG | domain multinode-513593-m02 has defined MAC address 52:54:00:20:b1:02 in network mk-multinode-513593
	I0116 02:46:34.685663  352071 main.go:141] libmachine: (multinode-513593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:b1:02", ip: ""} in network mk-multinode-513593: {Iface:virbr1 ExpiryTime:2024-01-16 03:45:06 +0000 UTC Type:0 Mac:52:54:00:20:b1:02 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-513593-m02 Clientid:01:52:54:00:20:b1:02}
	I0116 02:46:34.685707  352071 main.go:141] libmachine: (multinode-513593-m02) DBG | domain multinode-513593-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:20:b1:02 in network mk-multinode-513593
	I0116 02:46:34.685825  352071 host.go:66] Checking if "multinode-513593-m02" exists ...
	I0116 02:46:34.686188  352071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:46:34.686231  352071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:46:34.701830  352071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35503
	I0116 02:46:34.702348  352071 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:46:34.702838  352071 main.go:141] libmachine: Using API Version  1
	I0116 02:46:34.702865  352071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:46:34.703219  352071 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:46:34.703427  352071 main.go:141] libmachine: (multinode-513593-m02) Calling .DriverName
	I0116 02:46:34.703663  352071 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 02:46:34.703690  352071 main.go:141] libmachine: (multinode-513593-m02) Calling .GetSSHHostname
	I0116 02:46:34.706902  352071 main.go:141] libmachine: (multinode-513593-m02) DBG | domain multinode-513593-m02 has defined MAC address 52:54:00:20:b1:02 in network mk-multinode-513593
	I0116 02:46:34.707337  352071 main.go:141] libmachine: (multinode-513593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:b1:02", ip: ""} in network mk-multinode-513593: {Iface:virbr1 ExpiryTime:2024-01-16 03:45:06 +0000 UTC Type:0 Mac:52:54:00:20:b1:02 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-513593-m02 Clientid:01:52:54:00:20:b1:02}
	I0116 02:46:34.707375  352071 main.go:141] libmachine: (multinode-513593-m02) DBG | domain multinode-513593-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:20:b1:02 in network mk-multinode-513593
	I0116 02:46:34.707564  352071 main.go:141] libmachine: (multinode-513593-m02) Calling .GetSSHPort
	I0116 02:46:34.707748  352071 main.go:141] libmachine: (multinode-513593-m02) Calling .GetSSHKeyPath
	I0116 02:46:34.707901  352071 main.go:141] libmachine: (multinode-513593-m02) Calling .GetSSHUsername
	I0116 02:46:34.708043  352071 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-330687/.minikube/machines/multinode-513593-m02/id_rsa Username:docker}
	I0116 02:46:34.798749  352071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:46:34.813634  352071 status.go:257] multinode-513593-m02 status: &{Name:multinode-513593-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0116 02:46:34.813678  352071 status.go:255] checking status of multinode-513593-m03 ...
	I0116 02:46:34.814004  352071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:46:34.814054  352071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:46:34.829848  352071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38021
	I0116 02:46:34.830323  352071 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:46:34.830817  352071 main.go:141] libmachine: Using API Version  1
	I0116 02:46:34.830842  352071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:46:34.831198  352071 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:46:34.831408  352071 main.go:141] libmachine: (multinode-513593-m03) Calling .GetState
	I0116 02:46:34.833308  352071 status.go:330] multinode-513593-m03 host status = "Stopped" (err=<nil>)
	I0116 02:46:34.833323  352071 status.go:343] host is not running, skipping remaining checks
	I0116 02:46:34.833328  352071 status.go:257] multinode-513593-m03 status: &{Name:multinode-513593-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)
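The `exit status 7` above is expected at this point: after `node stop m03`, `minikube status` still prints the per-node report on stdout but exits non-zero because one host is stopped. A sketch that captures both the report and the exit code, deliberately treating any non-zero code as "something is not running" rather than assuming minikube's exact exit-code scheme:

	// statuscode_sketch.go: run `minikube status` and separate its per-node report
	// from its exit code, which is non-zero when any node is stopped (as above).
	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-513593", "status")
		out, err := cmd.Output()
		fmt.Print(string(out)) // the per-node host/kubelet/apiserver report shown above
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			log.Print("all nodes report Running")
		case errors.As(err, &exitErr):
			// non-zero exit (7 in the run above) means something is not running;
			// the meaning of individual codes is not assumed here
			log.Printf("status exited with code %d", exitErr.ExitCode())
		default:
			log.Fatalf("could not run status: %v", err)
		}
	}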

TestMultiNode/serial/StartAfterStop (31.81s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 node start m03 --alsologtostderr
E0116 02:46:40.376723  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/functional-084612/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-513593 node start m03 --alsologtostderr: (31.136260483s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (31.81s)

TestMultiNode/serial/RestartKeepsNodes (319.42s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-513593
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-513593
E0116 02:48:32.389138  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/ingress-addon-legacy-699477/client.crt: no such file or directory
E0116 02:49:00.074535  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/ingress-addon-legacy-699477/client.crt: no such file or directory
E0116 02:50:04.674191  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.crt: no such file or directory
multinode_test.go:318: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-513593: (3m5.503440556s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-513593 --wait=true -v=8 --alsologtostderr
E0116 02:51:12.690298  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/functional-084612/client.crt: no such file or directory
E0116 02:51:27.722271  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-513593 --wait=true -v=8 --alsologtostderr: (2m13.785001964s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-513593
--- PASS: TestMultiNode/serial/RestartKeepsNodes (319.42s)

TestMultiNode/serial/DeleteNode (1.89s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-513593 node delete m03: (1.28599778s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.89s)
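The go-template at the end prints the status of each node's Ready condition. A small sketch that reuses that template and counts how many nodes report True (kubectl assumed on PATH; after the delete above this profile should be down to two Ready nodes):

	// readycount_sketch.go: reuse the Ready-condition go-template from the log above
	// and count nodes whose Ready condition is True.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		tmpl := `go-template={{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
		out, err := exec.Command("kubectl", "get", "nodes", "-o", tmpl).Output()
		if err != nil {
			log.Fatalf("kubectl get nodes: %v", err)
		}
		ready := 0
		for _, s := range strings.Fields(string(out)) {
			if s == "True" {
				ready++
			}
		}
		fmt.Printf("%d node(s) Ready\n", ready)
	}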

TestMultiNode/serial/StopMultiNode (183.34s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 stop
E0116 02:53:32.389784  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/ingress-addon-legacy-699477/client.crt: no such file or directory
E0116 02:55:04.673789  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.crt: no such file or directory
multinode_test.go:342: (dbg) Done: out/minikube-linux-amd64 -p multinode-513593 stop: (3m3.127521217s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-513593 status: exit status 7 (105.88838ms)
-- stdout --
	multinode-513593
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-513593-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-513593 status --alsologtostderr: exit status 7 (107.740111ms)
-- stdout --
	multinode-513593
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-513593-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0116 02:55:31.257854  354322 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:55:31.258192  354322 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:55:31.258204  354322 out.go:309] Setting ErrFile to fd 2...
	I0116 02:55:31.258209  354322 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:55:31.258438  354322 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-330687/.minikube/bin
	I0116 02:55:31.258664  354322 out.go:303] Setting JSON to false
	I0116 02:55:31.258704  354322 mustload.go:65] Loading cluster: multinode-513593
	I0116 02:55:31.258768  354322 notify.go:220] Checking for updates...
	I0116 02:55:31.259288  354322 config.go:182] Loaded profile config "multinode-513593": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0116 02:55:31.259316  354322 status.go:255] checking status of multinode-513593 ...
	I0116 02:55:31.259830  354322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:55:31.259931  354322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:55:31.278445  354322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43595
	I0116 02:55:31.279278  354322 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:55:31.280570  354322 main.go:141] libmachine: Using API Version  1
	I0116 02:55:31.280606  354322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:55:31.280994  354322 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:55:31.281274  354322 main.go:141] libmachine: (multinode-513593) Calling .GetState
	I0116 02:55:31.282862  354322 status.go:330] multinode-513593 host status = "Stopped" (err=<nil>)
	I0116 02:55:31.282882  354322 status.go:343] host is not running, skipping remaining checks
	I0116 02:55:31.282890  354322 status.go:257] multinode-513593 status: &{Name:multinode-513593 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0116 02:55:31.282915  354322 status.go:255] checking status of multinode-513593-m02 ...
	I0116 02:55:31.283341  354322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:55:31.283392  354322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:55:31.298670  354322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33269
	I0116 02:55:31.299164  354322 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:55:31.299681  354322 main.go:141] libmachine: Using API Version  1
	I0116 02:55:31.299710  354322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:55:31.300056  354322 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:55:31.300307  354322 main.go:141] libmachine: (multinode-513593-m02) Calling .GetState
	I0116 02:55:31.302508  354322 status.go:330] multinode-513593-m02 host status = "Stopped" (err=<nil>)
	I0116 02:55:31.302536  354322 status.go:343] host is not running, skipping remaining checks
	I0116 02:55:31.302542  354322 status.go:257] multinode-513593-m02 status: &{Name:multinode-513593-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (183.34s)

TestMultiNode/serial/RestartMultiNode (96.93s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-513593 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0116 02:56:12.689329  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/functional-084612/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-513593 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m36.352064645s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-513593 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (96.93s)

TestMultiNode/serial/ValidateNameConflict (49.77s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-513593
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-513593-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-513593-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (82.676126ms)
-- stdout --
	* [multinode-513593-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17965
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17965-330687/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-330687/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-513593-m02' is duplicated with machine name 'multinode-513593-m02' in profile 'multinode-513593'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-513593-m03 --driver=kvm2  --container-runtime=containerd
E0116 02:57:35.737130  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/functional-084612/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-513593-m03 --driver=kvm2  --container-runtime=containerd: (48.351614561s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-513593
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-513593: exit status 80 (252.640448ms)
-- stdout --
	* Adding node m03 to cluster multinode-513593
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-513593-m03 already exists in multinode-513593-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-513593-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-513593-m03: (1.021518873s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (49.77s)

TestPreload (274.39s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-263005 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0116 02:58:32.387830  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/ingress-addon-legacy-699477/client.crt: no such file or directory
E0116 02:59:55.435646  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/ingress-addon-legacy-699477/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-263005 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (2m0.413521886s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-263005 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-263005 image pull gcr.io/k8s-minikube/busybox: (1.041268339s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-263005
E0116 03:00:04.673921  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.crt: no such file or directory
E0116 03:01:12.689298  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/functional-084612/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-263005: (1m31.564950593s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-263005 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-263005 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (59.996084681s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-263005 image list
helpers_test.go:175: Cleaning up "test-preload-263005" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-263005
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-263005: (1.12186732s)
--- PASS: TestPreload (274.39s)
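Note: the sequence above appears to exercise the preload path end to end — create a cluster on an older Kubernetes with --preload=false, pull an extra image, stop, restart with the current binary, and confirm the image is still present. A condensed sketch of the same flow, reusing the commands and versions from this run:

    $ out/minikube-linux-amd64 start -p test-preload-263005 --memory=2200 --preload=false \
        --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.4
    $ out/minikube-linux-amd64 -p test-preload-263005 image pull gcr.io/k8s-minikube/busybox
    $ out/minikube-linux-amd64 stop -p test-preload-263005
    $ out/minikube-linux-amd64 start -p test-preload-263005 --memory=2200 --driver=kvm2 --container-runtime=containerd
    # the pulled busybox image should survive the stop/start cycle
    $ out/minikube-linux-amd64 -p test-preload-263005 image list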

                                                
                                    
TestScheduledStopUnix (121.03s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-382551 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-382551 --memory=2048 --driver=kvm2  --container-runtime=containerd: (49.10610878s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-382551 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-382551 -n scheduled-stop-382551
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-382551 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-382551 --cancel-scheduled
E0116 03:03:32.389774  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/ingress-addon-legacy-699477/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-382551 -n scheduled-stop-382551
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-382551
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-382551 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-382551
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-382551: exit status 7 (89.470313ms)

                                                
                                                
-- stdout --
	scheduled-stop-382551
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-382551 -n scheduled-stop-382551
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-382551 -n scheduled-stop-382551: exit status 7 (90.341201ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-382551" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-382551
--- PASS: TestScheduledStopUnix (121.03s)
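Note: the run above walks minikube's scheduled-stop feature — a stop can be queued for a future time, inspected, cancelled, and re-queued, after which the profile's status reports Stopped (exit status 7 from `minikube status`, flagged "may be ok" above, is normal for a stopped host). A condensed sketch of the cycle with the commands from this run:

    $ out/minikube-linux-amd64 stop -p scheduled-stop-382551 --schedule 5m        # queue a stop
    $ out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-382551
    $ out/minikube-linux-amd64 stop -p scheduled-stop-382551 --cancel-scheduled   # cancel it
    $ out/minikube-linux-amd64 stop -p scheduled-stop-382551 --schedule 15s       # queue again
    $ out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-382551 # eventually prints Stopped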

                                                
                                    
TestRunningBinaryUpgrade (182.11s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2821261865 start -p running-upgrade-222038 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2821261865 start -p running-upgrade-222038 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m32.241316705s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-222038 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-222038 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m28.174463595s)
helpers_test.go:175: Cleaning up "running-upgrade-222038" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-222038
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-222038: (1.24499193s)
--- PASS: TestRunningBinaryUpgrade (182.11s)

                                                
                                    
TestKubernetesUpgrade (239.61s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-857752 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-857752 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m55.831718977s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-857752
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-857752: (2.225186209s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-857752 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-857752 status --format={{.Host}}: exit status 7 (87.242758ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-857752 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0116 03:10:04.673420  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-857752 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m22.465344466s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-857752 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-857752 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-857752 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (103.943573ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-857752] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17965
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17965-330687/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-330687/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-857752
	    minikube start -p kubernetes-upgrade-857752 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8577522 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-857752 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-857752 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0116 03:11:12.690150  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/functional-084612/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-857752 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (37.198927122s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-857752" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-857752
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-857752: (1.633265082s)
--- PASS: TestKubernetesUpgrade (239.61s)
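Note: as the K8S_DOWNGRADE_UNSUPPORTED message above spells out, an existing cluster can be upgraded in place but never downgraded; a downgrade requires deleting and recreating the profile (or creating a second one). A minimal sketch of the supported paths, using the versions from this run:

    # in-place upgrade of an existing profile
    $ out/minikube-linux-amd64 start -p kubernetes-upgrade-857752 --memory=2200 \
        --kubernetes-version=v1.29.0-rc.2 --driver=kvm2 --container-runtime=containerd
    # a "downgrade" only works by recreating the profile, as the suggestion text shows
    $ out/minikube-linux-amd64 delete -p kubernetes-upgrade-857752
    $ out/minikube-linux-amd64 start -p kubernetes-upgrade-857752 --kubernetes-version=v1.16.0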

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-965722 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-965722 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (111.884679ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-965722] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17965
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17965-330687/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-330687/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
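Note: the MK_USAGE error above confirms that --no-kubernetes and --kubernetes-version are mutually exclusive; if a version is pinned in the global config it has to be unset first, as the hint in the output suggests:

    $ minikube config unset kubernetes-version
    $ out/minikube-linux-amd64 start -p NoKubernetes-965722 --no-kubernetes --driver=kvm2 --container-runtime=containerd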

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (128.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-965722 --driver=kvm2  --container-runtime=containerd
E0116 03:05:04.672931  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-965722 --driver=kvm2  --container-runtime=containerd: (2m8.416560323s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-965722 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (128.71s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (51.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-965722 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-965722 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (50.717115789s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-965722 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-965722 status -o json: exit status 2 (297.664591ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-965722","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-965722
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (51.96s)

                                                
                                    
TestNetworkPlugins/group/false (3.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-080519 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-080519 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (131.192458ms)

                                                
                                                
-- stdout --
	* [false-080519] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17965
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17965-330687/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-330687/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0116 03:07:22.097654  360002 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:07:22.097950  360002 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:07:22.097959  360002 out.go:309] Setting ErrFile to fd 2...
	I0116 03:07:22.097964  360002 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:07:22.098160  360002 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-330687/.minikube/bin
	I0116 03:07:22.099047  360002 out.go:303] Setting JSON to false
	I0116 03:07:22.100485  360002 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":35394,"bootTime":1705339048,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 03:07:22.100579  360002 start.go:138] virtualization: kvm guest
	I0116 03:07:22.103127  360002 out.go:177] * [false-080519] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 03:07:22.104681  360002 out.go:177]   - MINIKUBE_LOCATION=17965
	I0116 03:07:22.106034  360002 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 03:07:22.104750  360002 notify.go:220] Checking for updates...
	I0116 03:07:22.107587  360002 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17965-330687/kubeconfig
	I0116 03:07:22.109025  360002 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-330687/.minikube
	I0116 03:07:22.110463  360002 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 03:07:22.111821  360002 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 03:07:22.113846  360002 config.go:182] Loaded profile config "NoKubernetes-965722": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I0116 03:07:22.114007  360002 config.go:182] Loaded profile config "cert-expiration-985631": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0116 03:07:22.114147  360002 config.go:182] Loaded profile config "cert-options-974015": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0116 03:07:22.114287  360002 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 03:07:22.154259  360002 out.go:177] * Using the kvm2 driver based on user configuration
	I0116 03:07:22.155759  360002 start.go:298] selected driver: kvm2
	I0116 03:07:22.155779  360002 start.go:902] validating driver "kvm2" against <nil>
	I0116 03:07:22.155791  360002 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 03:07:22.157912  360002 out.go:177] 
	W0116 03:07:22.159590  360002 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0116 03:07:22.161149  360002 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-080519 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-080519

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-080519

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-080519

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-080519

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-080519

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-080519

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-080519

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-080519

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-080519

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-080519

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-080519"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-080519"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-080519"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-080519

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-080519"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-080519"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-080519" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-080519" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-080519" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-080519" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-080519" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-080519" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-080519" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-080519" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-080519"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-080519"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-080519"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-080519"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-080519"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-080519" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-080519" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-080519" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-080519"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-080519"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-080519"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-080519"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-080519"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17965-330687/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 03:06:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.39.90:8443
  name: NoKubernetes-965722
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17965-330687/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 03:05:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.50.85:8443
  name: cert-expiration-985631
contexts:
- context:
    cluster: NoKubernetes-965722
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 03:06:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: NoKubernetes-965722
  name: NoKubernetes-965722
- context:
    cluster: cert-expiration-985631
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 03:05:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-985631
  name: cert-expiration-985631
current-context: ""
kind: Config
preferences: {}
users:
- name: NoKubernetes-965722
  user:
    client-certificate: /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/NoKubernetes-965722/client.crt
    client-key: /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/NoKubernetes-965722/client.key
- name: cert-expiration-985631
  user:
    client-certificate: /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/cert-expiration-985631/client.crt
    client-key: /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/cert-expiration-985631/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-080519

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-080519"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-080519"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-080519"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-080519"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-080519"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-080519"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-080519"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-080519"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-080519"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-080519"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-080519"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-080519"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-080519"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-080519"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-080519"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-080519"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-080519"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-080519"

                                                
                                                
----------------------- debugLogs end: false-080519 [took: 3.514314223s] --------------------------------
helpers_test.go:175: Cleaning up "false-080519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-080519
--- PASS: TestNetworkPlugins/group/false (3.82s)
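Note: the expected failure above shows that --cni=false is rejected when the container runtime is containerd, which relies on a CNI plugin for pod networking. A sketch of a valid start under the same constraints; "bridge" is used here only because it is one of the CNI options exercised later in this report:

    # containerd needs some CNI; --cni=bridge, --cni=flannel or --cni=calico all satisfy it
    $ out/minikube-linux-amd64 start -p bridge-080519 --memory=3072 --cni=bridge \
        --driver=kvm2 --container-runtime=containerd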

                                                
                                    
TestNoKubernetes/serial/Start (28.68s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-965722 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-965722 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (28.676114032s)
--- PASS: TestNoKubernetes/serial/Start (28.68s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-965722 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-965722 "sudo systemctl is-active --quiet service kubelet": exit status 1 (237.110154ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)
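Note: the "Process exited with status 3" line is the point of the check — `systemctl is-active` exits non-zero for a unit that is not running (3 here), and `minikube ssh` surfaces that as exit status 1, so the test passes precisely because kubelet is inactive. A variant of the probe without --quiet, which also prints the unit state, might look like:

    # prints "active" or "inactive"; a non-zero exit means kubelet is not running
    $ out/minikube-linux-amd64 ssh -p NoKubernetes-965722 "sudo systemctl is-active kubelet"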

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.77s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.77s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-965722
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-965722: (1.306514428s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (73.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-965722 --driver=kvm2  --container-runtime=containerd
E0116 03:08:07.722570  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.crt: no such file or directory
E0116 03:08:32.388377  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/ingress-addon-legacy-699477/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-965722 --driver=kvm2  --container-runtime=containerd: (1m13.365989232s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (73.37s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-965722 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-965722 "sudo systemctl is-active --quiet service kubelet": exit status 1 (321.2942ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.4s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.40s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (159.97s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2532520829 start -p stopped-upgrade-769224 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2532520829 start -p stopped-upgrade-769224 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (58.958385529s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2532520829 -p stopped-upgrade-769224 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2532520829 -p stopped-upgrade-769224 stop: (2.193862217s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-769224 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-769224 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m38.814355498s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (159.97s)

                                                
                                    
TestPause/serial/Start (129.59s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-177097 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-177097 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (2m9.592629858s)
--- PASS: TestPause/serial/Start (129.59s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (102.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-080519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-080519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m42.006159888s)
--- PASS: TestNetworkPlugins/group/auto/Start (102.01s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.59s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-177097 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-177097 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (7.575363002s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.59s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (82.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-080519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-080519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m22.705193248s)
--- PASS: TestNetworkPlugins/group/flannel/Start (82.71s)

                                                
                                    
TestPause/serial/Pause (0.8s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-177097 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.80s)

                                                
                                    
TestPause/serial/VerifyStatus (0.3s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-177097 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-177097 --output=json --layout=cluster: exit status 2 (303.789017ms)

                                                
                                                
-- stdout --
	{"Name":"pause-177097","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-177097","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)
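Note: the StatusCode fields in the JSON above mirror HTTP-style codes as reported by minikube itself — 200/OK for a healthy component, 405/Stopped for the kubelet, and 418/Paused for the paused apiserver. The pause/inspect/unpause cycle the surrounding tests walk through looks like:

    $ out/minikube-linux-amd64 pause -p pause-177097
    $ out/minikube-linux-amd64 status -p pause-177097 --output=json --layout=cluster   # reports 418/Paused with exit status 2
    $ out/minikube-linux-amd64 unpause -p pause-177097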

                                                
                                    
TestPause/serial/Unpause (0.75s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-177097 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.75s)

                                                
                                    
TestPause/serial/PauseAgain (0.87s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-177097 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.87s)

                                                
                                    
TestPause/serial/DeletePaused (1.03s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-177097 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-177097 --alsologtostderr -v=5: (1.03103223s)
--- PASS: TestPause/serial/DeletePaused (1.03s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (14.37s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (14.365707061s)
--- PASS: TestPause/serial/VerifyDeletedResources (14.37s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (109.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-080519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-080519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m49.320499905s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (109.32s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-769224
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (97.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-080519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-080519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m37.594874936s)
--- PASS: TestNetworkPlugins/group/bridge/Start (97.59s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-080519 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-080519 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ksgd7" [8d6ea7a7-9dc9-488a-b15b-a5b824cb38bf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-ksgd7" [8d6ea7a7-9dc9-488a-b15b-a5b824cb38bf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004809231s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.31s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-080519 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-080519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-080519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
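The DNS, Localhost and HairPin checks above (and the equivalent checks for the other plugins below) all run against the netcat deployment created by NetCatPod. A minimal manual sketch, assuming the auto-080519 context from this run is still available:

    # In-cluster DNS lookup of the API service.
    kubectl --context auto-080519 exec deployment/netcat -- nslookup kubernetes.default
    # Pod can reach a port on its own localhost.
    kubectl --context auto-080519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # Hairpin: pod reaches itself through its own service name.
    kubectl --context auto-080519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"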

                                                
                                    
TestNetworkPlugins/group/calico/Start (107.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-080519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-080519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m47.42552215s)
--- PASS: TestNetworkPlugins/group/calico/Start (107.43s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-q6m4w" [bb646d1e-c3d4-427d-9d38-256ea67ee4a2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.007836365s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
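The ControllerPod step waits for the CNI's controller pods to report Running by label. A roughly equivalent manual check (kubectl wait is an assumption here, not what the test harness itself calls):

    # Wait up to 10 minutes for the flannel DaemonSet pods, matching the label and namespace above.
    kubectl --context flannel-080519 -n kube-flannel wait pod -l app=flannel \
      --for=condition=Ready --timeout=10m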

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-080519 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-080519 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gkhmj" [eab4559b-1a05-4aa5-9ff3-99663d8f273e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-gkhmj" [eab4559b-1a05-4aa5-9ff3-99663d8f273e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004963769s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.32s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-080519 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-080519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-080519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (80.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-080519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-080519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m20.547892275s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (80.55s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-080519 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-080519 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qkt2r" [257689d4-253e-4675-bf84-b0d81261c305] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-qkt2r" [257689d4-253e-4675-bf84-b0d81261c305] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.007629744s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.32s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-080519 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-080519 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xhtsm" [842c5d8d-cae4-4c96-8af7-be457958e529] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-xhtsm" [842c5d8d-cae4-4c96-8af7-be457958e529] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.010319738s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.34s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-080519 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-080519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-080519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-080519 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-080519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-080519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (95.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-080519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
E0116 03:14:15.737489  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/functional-084612/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-080519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m35.154002535s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (95.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (176.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-818779 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-818779 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m56.690244832s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (176.69s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-9txtq" [80af9dc1-090b-4266-bc6c-e03332c123cd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006509502s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-080519 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-080519 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2xwj9" [2748438e-9a91-45aa-907f-bcd2ccd9ee70] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-2xwj9" [2748438e-9a91-45aa-907f-bcd2ccd9ee70] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.006876766s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.35s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-080519 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-080519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-080519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-7tcjf" [0d400d72-a034-47a2-9481-dd94c34befb0] Running
E0116 03:15:04.673497  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006008312s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-080519 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-080519 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-q6zqx" [d8c797b2-2d01-4b43-b4b3-3cdfce3b7b37] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-q6zqx" [d8c797b2-2d01-4b43-b4b3-3cdfce3b7b37] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.005233879s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.34s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (136.10s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-411416 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-411416 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (2m16.099495131s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (136.10s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-080519 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-080519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-080519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (84.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-347810 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-347810 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m24.353357115s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (84.35s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-080519 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-080519 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4j2rj" [6a2937ec-4a70-44df-ac62-e03a142f0b58] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4j2rj" [6a2937ec-4a70-44df-ac62-e03a142f0b58] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.005421482s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-080519 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-080519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-080519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)
E0116 03:23:50.669167  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/enable-default-cni-080519/client.crt: no such file or directory
E0116 03:24:09.900453  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/bridge-080519/client.crt: no such file or directory
E0116 03:24:18.353592  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/enable-default-cni-080519/client.crt: no such file or directory
E0116 03:24:33.375279  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/calico-080519/client.crt: no such file or directory
E0116 03:24:47.723018  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.crt: no such file or directory

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (107.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-070791 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E0116 03:16:35.435863  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/ingress-addon-legacy-699477/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-070791 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m47.178299857s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (107.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-347810 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [43290732-d481-47bc-b1a5-42564cec5b55] Pending
helpers_test.go:344: "busybox" [43290732-d481-47bc-b1a5-42564cec5b55] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [43290732-d481-47bc-b1a5-42564cec5b55] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.005385147s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-347810 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.37s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.48s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-347810 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-347810 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.356511872s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-347810 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.48s)
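EnableAddonWhileActive exercises the addon image and registry override flags while the cluster is running. The same commands can be replayed by hand, for example (the values are the test's own placeholders; fake.domain is intentionally unreachable):

    # Enable metrics-server with overridden image and registry, then confirm the deployment picked them up.
    out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-347810 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    kubectl --context embed-certs-347810 describe deploy/metrics-server -n kube-system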

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (91.82s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-347810 --alsologtostderr -v=3
E0116 03:17:16.161046  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/auto-080519/client.crt: no such file or directory
E0116 03:17:16.166959  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/auto-080519/client.crt: no such file or directory
E0116 03:17:16.177328  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/auto-080519/client.crt: no such file or directory
E0116 03:17:16.197720  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/auto-080519/client.crt: no such file or directory
E0116 03:17:16.238074  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/auto-080519/client.crt: no such file or directory
E0116 03:17:16.318505  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/auto-080519/client.crt: no such file or directory
E0116 03:17:16.478904  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/auto-080519/client.crt: no such file or directory
E0116 03:17:16.799354  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/auto-080519/client.crt: no such file or directory
E0116 03:17:17.440206  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/auto-080519/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-347810 --alsologtostderr -v=3: (1m31.822803199s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.82s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-818779 create -f testdata/busybox.yaml
E0116 03:17:18.720978  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/auto-080519/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [19e9177d-993f-42e5-bbed-2be1a5548322] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0116 03:17:21.281598  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/auto-080519/client.crt: no such file or directory
helpers_test.go:344: "busybox" [19e9177d-993f-42e5-bbed-2be1a5548322] Running
E0116 03:17:26.402220  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/auto-080519/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003804874s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-818779 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.47s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-411416 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8cd0f2f2-bf41-4aac-ae4a-d52baf243744] Pending
helpers_test.go:344: "busybox" [8cd0f2f2-bf41-4aac-ae4a-d52baf243744] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8cd0f2f2-bf41-4aac-ae4a-d52baf243744] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004993682s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-411416 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.35s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-818779 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-818779 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (92.10s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-818779 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-818779 --alsologtostderr -v=3: (1m32.101965458s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (92.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-411416 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-411416 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.031113104s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-411416 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (91.87s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-411416 --alsologtostderr -v=3
E0116 03:17:36.643412  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/auto-080519/client.crt: no such file or directory
E0116 03:17:57.124164  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/auto-080519/client.crt: no such file or directory
E0116 03:18:02.588707  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/flannel-080519/client.crt: no such file or directory
E0116 03:18:02.594052  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/flannel-080519/client.crt: no such file or directory
E0116 03:18:02.604390  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/flannel-080519/client.crt: no such file or directory
E0116 03:18:02.624722  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/flannel-080519/client.crt: no such file or directory
E0116 03:18:02.665072  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/flannel-080519/client.crt: no such file or directory
E0116 03:18:02.745676  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/flannel-080519/client.crt: no such file or directory
E0116 03:18:02.906222  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/flannel-080519/client.crt: no such file or directory
E0116 03:18:03.226832  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/flannel-080519/client.crt: no such file or directory
E0116 03:18:03.867945  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/flannel-080519/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-411416 --alsologtostderr -v=3: (1m31.868116851s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.87s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.30s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-070791 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [42e1054a-f4e0-4045-b0db-7222766df4de] Pending
E0116 03:18:05.148140  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/flannel-080519/client.crt: no such file or directory
helpers_test.go:344: "busybox" [42e1054a-f4e0-4045-b0db-7222766df4de] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [42e1054a-f4e0-4045-b0db-7222766df4de] Running
E0116 03:18:07.708779  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/flannel-080519/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004138214s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-070791 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-070791 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0116 03:18:12.829449  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/flannel-080519/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-070791 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.261546202s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-070791 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.85s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-070791 --alsologtostderr -v=3
E0116 03:18:23.070667  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/flannel-080519/client.crt: no such file or directory
E0116 03:18:32.387949  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/ingress-addon-legacy-699477/client.crt: no such file or directory
E0116 03:18:38.084391  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/auto-080519/client.crt: no such file or directory
E0116 03:18:42.214918  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/bridge-080519/client.crt: no such file or directory
E0116 03:18:42.220215  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/bridge-080519/client.crt: no such file or directory
E0116 03:18:42.231208  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/bridge-080519/client.crt: no such file or directory
E0116 03:18:42.251534  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/bridge-080519/client.crt: no such file or directory
E0116 03:18:42.291889  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/bridge-080519/client.crt: no such file or directory
E0116 03:18:42.372258  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/bridge-080519/client.crt: no such file or directory
E0116 03:18:42.532924  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/bridge-080519/client.crt: no such file or directory
E0116 03:18:42.853255  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/bridge-080519/client.crt: no such file or directory
E0116 03:18:43.494026  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/bridge-080519/client.crt: no such file or directory
E0116 03:18:43.551257  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/flannel-080519/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-070791 --alsologtostderr -v=3: (1m31.851438559s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.85s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-347810 -n embed-certs-347810
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-347810 -n embed-certs-347810: exit status 7 (79.828701ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-347810 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)
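EnableAddonAfterStop checks that addon configuration still works against a stopped profile; the status probe deliberately tolerates exit status 7, which is what the status command returns here for a stopped host. A minimal sketch of the same probe:

    # Exit status 7 with "Stopped" output is expected for a halted profile (treated as "may be ok" above).
    out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-347810 -n embed-certs-347810
    echo "status exit code: $?"
    out/minikube-linux-amd64 addons enable dashboard -p embed-certs-347810 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4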

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (580.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-347810 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E0116 03:18:44.774916  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/bridge-080519/client.crt: no such file or directory
E0116 03:18:47.335551  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/bridge-080519/client.crt: no such file or directory
E0116 03:18:50.668530  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/enable-default-cni-080519/client.crt: no such file or directory
E0116 03:18:50.673836  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/enable-default-cni-080519/client.crt: no such file or directory
E0116 03:18:50.684141  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/enable-default-cni-080519/client.crt: no such file or directory
E0116 03:18:50.704498  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/enable-default-cni-080519/client.crt: no such file or directory
E0116 03:18:50.744867  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/enable-default-cni-080519/client.crt: no such file or directory
E0116 03:18:50.825250  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/enable-default-cni-080519/client.crt: no such file or directory
E0116 03:18:50.985774  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/enable-default-cni-080519/client.crt: no such file or directory
E0116 03:18:51.306445  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/enable-default-cni-080519/client.crt: no such file or directory
E0116 03:18:51.947453  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/enable-default-cni-080519/client.crt: no such file or directory
E0116 03:18:52.456065  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/bridge-080519/client.crt: no such file or directory
E0116 03:18:53.228032  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/enable-default-cni-080519/client.crt: no such file or directory
E0116 03:18:55.788995  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/enable-default-cni-080519/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-347810 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (9m39.986056401s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-347810 -n embed-certs-347810
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (580.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-818779 -n old-k8s-version-818779
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-818779 -n old-k8s-version-818779: exit status 7 (85.672355ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-818779 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (102.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-818779 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
E0116 03:19:00.909877  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/enable-default-cni-080519/client.crt: no such file or directory
E0116 03:19:02.697179  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/bridge-080519/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-818779 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (1m41.973498439s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-818779 -n old-k8s-version-818779
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (102.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-411416 -n no-preload-411416
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-411416 -n no-preload-411416: exit status 7 (87.445778ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-411416 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (350.87s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-411416 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0116 03:19:11.151047  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/enable-default-cni-080519/client.crt: no such file or directory
E0116 03:19:23.178093  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/bridge-080519/client.crt: no such file or directory
E0116 03:19:24.511766  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/flannel-080519/client.crt: no such file or directory
E0116 03:19:31.631790  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/enable-default-cni-080519/client.crt: no such file or directory
E0116 03:19:33.376024  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/calico-080519/client.crt: no such file or directory
E0116 03:19:33.381352  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/calico-080519/client.crt: no such file or directory
E0116 03:19:33.392170  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/calico-080519/client.crt: no such file or directory
E0116 03:19:33.412532  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/calico-080519/client.crt: no such file or directory
E0116 03:19:33.453227  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/calico-080519/client.crt: no such file or directory
E0116 03:19:33.533669  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/calico-080519/client.crt: no such file or directory
E0116 03:19:33.694666  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/calico-080519/client.crt: no such file or directory
E0116 03:19:34.014924  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/calico-080519/client.crt: no such file or directory
E0116 03:19:34.656124  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/calico-080519/client.crt: no such file or directory
E0116 03:19:35.936925  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/calico-080519/client.crt: no such file or directory
E0116 03:19:38.497722  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/calico-080519/client.crt: no such file or directory
E0116 03:19:43.618370  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/calico-080519/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-411416 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (5m50.559394494s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-411416 -n no-preload-411416
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (350.87s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-070791 -n default-k8s-diff-port-070791
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-070791 -n default-k8s-diff-port-070791: exit status 7 (93.040538ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-070791 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (336.43s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-070791 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E0116 03:19:53.859575  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/calico-080519/client.crt: no such file or directory
E0116 03:19:59.738887  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/kindnet-080519/client.crt: no such file or directory
E0116 03:19:59.744231  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/kindnet-080519/client.crt: no such file or directory
E0116 03:19:59.754603  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/kindnet-080519/client.crt: no such file or directory
E0116 03:19:59.774933  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/kindnet-080519/client.crt: no such file or directory
E0116 03:19:59.815361  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/kindnet-080519/client.crt: no such file or directory
E0116 03:19:59.896381  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/kindnet-080519/client.crt: no such file or directory
E0116 03:20:00.004620  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/auto-080519/client.crt: no such file or directory
E0116 03:20:00.056959  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/kindnet-080519/client.crt: no such file or directory
E0116 03:20:00.377248  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/kindnet-080519/client.crt: no such file or directory
E0116 03:20:01.018382  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/kindnet-080519/client.crt: no such file or directory
E0116 03:20:02.298858  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/kindnet-080519/client.crt: no such file or directory
E0116 03:20:04.139214  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/bridge-080519/client.crt: no such file or directory
E0116 03:20:04.673722  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.crt: no such file or directory
E0116 03:20:04.860062  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/kindnet-080519/client.crt: no such file or directory
E0116 03:20:09.981054  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/kindnet-080519/client.crt: no such file or directory
E0116 03:20:12.592842  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/enable-default-cni-080519/client.crt: no such file or directory
E0116 03:20:14.340482  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/calico-080519/client.crt: no such file or directory
E0116 03:20:20.221706  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/kindnet-080519/client.crt: no such file or directory
E0116 03:20:40.702352  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/kindnet-080519/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-070791 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m35.955267371s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-070791 -n default-k8s-diff-port-070791
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (336.43s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (41.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0116 03:20:46.432548  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/flannel-080519/client.crt: no such file or directory
E0116 03:20:46.708284  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/custom-flannel-080519/client.crt: no such file or directory
E0116 03:20:46.713619  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/custom-flannel-080519/client.crt: no such file or directory
E0116 03:20:46.723951  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/custom-flannel-080519/client.crt: no such file or directory
E0116 03:20:46.744972  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/custom-flannel-080519/client.crt: no such file or directory
E0116 03:20:46.785293  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/custom-flannel-080519/client.crt: no such file or directory
E0116 03:20:46.865700  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/custom-flannel-080519/client.crt: no such file or directory
E0116 03:20:47.026240  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/custom-flannel-080519/client.crt: no such file or directory
E0116 03:20:47.346922  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/custom-flannel-080519/client.crt: no such file or directory
E0116 03:20:47.988056  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/custom-flannel-080519/client.crt: no such file or directory
E0116 03:20:49.268938  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/custom-flannel-080519/client.crt: no such file or directory
E0116 03:20:51.829149  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/custom-flannel-080519/client.crt: no such file or directory
E0116 03:20:55.300671  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/calico-080519/client.crt: no such file or directory
E0116 03:20:56.950115  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/custom-flannel-080519/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-lxm9q" [976a5fdb-6821-4bd5-8665-13220bd42262] Pending
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-lxm9q" [976a5fdb-6821-4bd5-8665-13220bd42262] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0116 03:21:07.190701  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/custom-flannel-080519/client.crt: no such file or directory
E0116 03:21:12.689863  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/functional-084612/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-lxm9q" [976a5fdb-6821-4bd5-8665-13220bd42262] Running
E0116 03:21:21.663171  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/kindnet-080519/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 41.006328078s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (41.01s)
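The UserAppExistsAfterStop check above waits (up to 9m0s) for pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace to reach Running after the restart. A rough equivalent of that readiness check, sketched in Go via kubectl wait rather than the test's own polling helper, reusing the context name from this log:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// One-shot readiness check; the test's helper instead polls pod phase itself.
	cmd := exec.Command("kubectl", "wait", "pod",
		"-l", "k8s-app=kubernetes-dashboard",
		"-n", "kubernetes-dashboard",
		"--context", "old-k8s-version-818779",
		"--for=condition=Ready",
		"--timeout=9m")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("dashboard pods not ready: %v\n%s", err, out)
	}
}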

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-lxm9q" [976a5fdb-6821-4bd5-8665-13220bd42262] Running
E0116 03:21:26.060141  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/bridge-080519/client.crt: no such file or directory
E0116 03:21:27.671727  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/custom-flannel-080519/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004322082s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-818779 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-818779 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/old-k8s-version/serial/Pause (2.88s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-818779 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-818779 -n old-k8s-version-818779
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-818779 -n old-k8s-version-818779: exit status 2 (279.972787ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-818779 -n old-k8s-version-818779
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-818779 -n old-k8s-version-818779: exit status 2 (283.225522ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-818779 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-818779 -n old-k8s-version-818779
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-818779 -n old-k8s-version-818779
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.88s)
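The Pause subtests in this report drive a pause/unpause round trip: pause the profile, read the component states ("status" exits non-zero, 2 here, while the APIServer reports Paused and the Kubelet reports Stopped), then unpause and re-check. A minimal Go sketch of the same sequence, assuming the binary path and profile name shown in this log:

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to the minikube binary used in this report and prints
// whatever it returns; non-zero exits are reported rather than fatal,
// because "status" is expected to exit 2 while the cluster is paused.
func run(args ...string) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	fmt.Printf("minikube %v\n%s(err: %v)\n\n", args, out, err)
}

func main() {
	profile := "old-k8s-version-818779"
	run("pause", "-p", profile, "--alsologtostderr", "-v=1")
	run("status", "--format={{.APIServer}}", "-p", profile, "-n", profile) // "Paused" in the log
	run("status", "--format={{.Kubelet}}", "-p", profile, "-n", profile)   // "Stopped" in the log
	run("unpause", "-p", profile, "--alsologtostderr", "-v=1")
	run("status", "--format={{.APIServer}}", "-p", profile, "-n", profile) // re-check after unpause
	run("status", "--format={{.Kubelet}}", "-p", profile, "-n", profile)
}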

TestStartStop/group/newest-cni/serial/FirstStart (62.99s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-435935 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0116 03:21:34.513145  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/enable-default-cni-080519/client.crt: no such file or directory
E0116 03:22:08.632899  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/custom-flannel-080519/client.crt: no such file or directory
E0116 03:22:16.161209  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/auto-080519/client.crt: no such file or directory
E0116 03:22:17.221248  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/calico-080519/client.crt: no such file or directory
E0116 03:22:18.938884  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/old-k8s-version-818779/client.crt: no such file or directory
E0116 03:22:18.944250  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/old-k8s-version-818779/client.crt: no such file or directory
E0116 03:22:18.954610  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/old-k8s-version-818779/client.crt: no such file or directory
E0116 03:22:18.974980  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/old-k8s-version-818779/client.crt: no such file or directory
E0116 03:22:19.015327  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/old-k8s-version-818779/client.crt: no such file or directory
E0116 03:22:19.095632  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/old-k8s-version-818779/client.crt: no such file or directory
E0116 03:22:19.255930  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/old-k8s-version-818779/client.crt: no such file or directory
E0116 03:22:19.576071  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/old-k8s-version-818779/client.crt: no such file or directory
E0116 03:22:20.216950  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/old-k8s-version-818779/client.crt: no such file or directory
E0116 03:22:21.497274  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/old-k8s-version-818779/client.crt: no such file or directory
E0116 03:22:24.058500  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/old-k8s-version-818779/client.crt: no such file or directory
E0116 03:22:29.179736  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/old-k8s-version-818779/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-435935 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (1m2.986706734s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (62.99s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.43s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-435935 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-435935 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.43246921s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.43s)

TestStartStop/group/newest-cni/serial/Stop (12.15s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-435935 --alsologtostderr -v=3
E0116 03:22:39.420561  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/old-k8s-version-818779/client.crt: no such file or directory
E0116 03:22:43.583683  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/kindnet-080519/client.crt: no such file or directory
E0116 03:22:43.845057  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/auto-080519/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-435935 --alsologtostderr -v=3: (12.148226652s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.15s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-435935 -n newest-cni-435935
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-435935 -n newest-cni-435935: exit status 7 (93.288186ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-435935 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/newest-cni/serial/SecondStart (51.59s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-435935 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0116 03:22:59.901263  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/old-k8s-version-818779/client.crt: no such file or directory
E0116 03:23:02.588129  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/flannel-080519/client.crt: no such file or directory
E0116 03:23:30.273256  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/flannel-080519/client.crt: no such file or directory
E0116 03:23:30.553427  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/custom-flannel-080519/client.crt: no such file or directory
E0116 03:23:32.388007  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/ingress-addon-legacy-699477/client.crt: no such file or directory
E0116 03:23:40.861889  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/old-k8s-version-818779/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-435935 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (51.226511521s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-435935 -n newest-cni-435935
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (51.59s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-435935 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/newest-cni/serial/Pause (2.89s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-435935 --alsologtostderr -v=1
E0116 03:23:42.214446  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/bridge-080519/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-435935 -n newest-cni-435935
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-435935 -n newest-cni-435935: exit status 2 (295.093663ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-435935 -n newest-cni-435935
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-435935 -n newest-cni-435935: exit status 2 (288.776147ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-435935 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-435935 -n newest-cni-435935
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-435935 -n newest-cni-435935
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.89s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-fs7gs" [d76d78e7-bfec-4610-9b2e-303eb29a2467] Running
E0116 03:24:59.738994  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/kindnet-080519/client.crt: no such file or directory
E0116 03:25:01.062109  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/calico-080519/client.crt: no such file or directory
E0116 03:25:02.782446  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/old-k8s-version-818779/client.crt: no such file or directory
E0116 03:25:04.673111  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/addons-133977/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005730315s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.1s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-fs7gs" [d76d78e7-bfec-4610-9b2e-303eb29a2467] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004871071s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-411416 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-411416 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/no-preload/serial/Pause (2.87s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-411416 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-411416 -n no-preload-411416
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-411416 -n no-preload-411416: exit status 2 (292.642153ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-411416 -n no-preload-411416
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-411416 -n no-preload-411416: exit status 2 (271.045884ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-411416 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-411416 -n no-preload-411416
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-411416 -n no-preload-411416
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.87s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (17.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-tsrvw" [adac8a0b-d8f9-476b-98dc-273ae06c6f05] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0116 03:25:27.424330  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/kindnet-080519/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-tsrvw" [adac8a0b-d8f9-476b-98dc-273ae06c6f05] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.005350518s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (17.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-tsrvw" [adac8a0b-d8f9-476b-98dc-273ae06c6f05] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005713719s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-070791 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-070791 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.79s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-070791 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-070791 -n default-k8s-diff-port-070791
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-070791 -n default-k8s-diff-port-070791: exit status 2 (270.502719ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-070791 -n default-k8s-diff-port-070791
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-070791 -n default-k8s-diff-port-070791: exit status 2 (281.961154ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-070791 --alsologtostderr -v=1
E0116 03:25:46.708534  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/custom-flannel-080519/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-070791 -n default-k8s-diff-port-070791
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-070791 -n default-k8s-diff-port-070791
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.79s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7qrzz" [9061a8b8-1b18-4c8d-b68e-8997931618eb] Running
E0116 03:28:25.076504  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/default-k8s-diff-port-070791/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005319652s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7qrzz" [9061a8b8-1b18-4c8d-b68e-8997931618eb] Running
E0116 03:28:32.387893  337873 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/ingress-addon-legacy-699477/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00636106s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-347810 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-347810 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/embed-certs/serial/Pause (2.8s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-347810 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-347810 -n embed-certs-347810
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-347810 -n embed-certs-347810: exit status 2 (271.71824ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-347810 -n embed-certs-347810
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-347810 -n embed-certs-347810: exit status 2 (267.824011ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-347810 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-347810 -n embed-certs-347810
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-347810 -n embed-certs-347810
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.80s)

Test skip (39/318)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
14 TestDownloadOnly/v1.28.4/cached-images 0
15 TestDownloadOnly/v1.28.4/binaries 0
16 TestDownloadOnly/v1.28.4/kubectl 0
23 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
24 TestDownloadOnly/v1.29.0-rc.2/binaries 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
131 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
132 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
133 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
134 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
135 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
136 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
137 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
163 TestImageBuild 0
196 TestKicCustomNetwork 0
197 TestKicExistingNetwork 0
198 TestKicCustomSubnet 0
199 TestKicStaticIP 0
231 TestChangeNoneUser 0
234 TestScheduledStopWindows 0
236 TestSkaffold 0
238 TestInsufficientStorage 0
242 TestMissingContainerUpgrade 0
249 TestNetworkPlugins/group/kubenet 6.58
257 TestNetworkPlugins/group/cilium 4.27
273 TestStartStop/group/disable-driver-mounts 0.17
TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)
TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)
TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)
TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)
TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)
TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)
TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)
TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)
TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)
TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)
TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)
TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)
TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)
TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
TestNetworkPlugins/group/kubenet (6.58s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-080519 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-080519
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-080519
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-080519
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-080519
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-080519
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-080519
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-080519
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-080519
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-080519
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-080519
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-080519"
>>> host: /etc/hosts:
* Profile "kubenet-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-080519"
>>> host: /etc/resolv.conf:
* Profile "kubenet-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-080519"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-080519
>>> host: crictl pods:
* Profile "kubenet-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-080519"
>>> host: crictl containers:
* Profile "kubenet-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-080519"
>>> k8s: describe netcat deployment:
error: context "kubenet-080519" does not exist
>>> k8s: describe netcat pod(s):
error: context "kubenet-080519" does not exist
>>> k8s: netcat logs:
error: context "kubenet-080519" does not exist
>>> k8s: describe coredns deployment:
error: context "kubenet-080519" does not exist
>>> k8s: describe coredns pods:
error: context "kubenet-080519" does not exist
>>> k8s: coredns logs:
error: context "kubenet-080519" does not exist
>>> k8s: describe api server pod(s):
error: context "kubenet-080519" does not exist
>>> k8s: api server logs:
error: context "kubenet-080519" does not exist
>>> host: /etc/cni:
* Profile "kubenet-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-080519"
>>> host: ip a s:
* Profile "kubenet-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-080519"
>>> host: ip r s:
* Profile "kubenet-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-080519"
>>> host: iptables-save:
* Profile "kubenet-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-080519"
>>> host: iptables table nat:
* Profile "kubenet-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-080519"
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-080519" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-080519" does not exist
>>> k8s: kube-proxy logs:
error: context "kubenet-080519" does not exist
>>> host: kubelet daemon status:
* Profile "kubenet-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-080519"
>>> host: kubelet daemon config:
* Profile "kubenet-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-080519"
>>> k8s: kubelet logs:
* Profile "kubenet-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-080519"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-080519"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-080519"
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17965-330687/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 03:06:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.39.90:8443
  name: NoKubernetes-965722
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17965-330687/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 03:05:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.50.85:8443
  name: cert-expiration-985631
contexts:
- context:
    cluster: NoKubernetes-965722
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 03:06:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: NoKubernetes-965722
  name: NoKubernetes-965722
- context:
    cluster: cert-expiration-985631
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 03:05:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-985631
  name: cert-expiration-985631
current-context: ""
kind: Config
preferences: {}
users:
- name: NoKubernetes-965722
  user:
    client-certificate: /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/NoKubernetes-965722/client.crt
    client-key: /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/NoKubernetes-965722/client.key
- name: cert-expiration-985631
  user:
    client-certificate: /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/cert-expiration-985631/client.crt
    client-key: /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/cert-expiration-985631/client.key
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-080519
>>> host: docker daemon status:
* Profile "kubenet-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-080519"
>>> host: docker daemon config:
* Profile "kubenet-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-080519"
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-080519"
>>> host: docker system info:
* Profile "kubenet-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-080519"
>>> host: cri-docker daemon status:
* Profile "kubenet-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-080519"
>>> host: cri-docker daemon config:
* Profile "kubenet-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-080519"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-080519"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-080519"
>>> host: cri-dockerd version:
* Profile "kubenet-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-080519"
>>> host: containerd daemon status:
* Profile "kubenet-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-080519"
>>> host: containerd daemon config:
* Profile "kubenet-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-080519"
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-080519"
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-080519"
>>> host: containerd config dump:
* Profile "kubenet-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-080519"
>>> host: crio daemon status:
* Profile "kubenet-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-080519"
>>> host: crio daemon config:
* Profile "kubenet-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-080519"
>>> host: /etc/crio:
* Profile "kubenet-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-080519"
>>> host: crio config:
* Profile "kubenet-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-080519"
----------------------- debugLogs end: kubenet-080519 [took: 6.095994649s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-080519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-080519
--- SKIP: TestNetworkPlugins/group/kubenet (6.58s)
TestNetworkPlugins/group/cilium (4.27s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-080519 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-080519
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-080519
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-080519
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-080519
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-080519
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-080519
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-080519
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-080519
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-080519
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-080519
>>> host: /etc/nsswitch.conf:
* Profile "cilium-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-080519"
>>> host: /etc/hosts:
* Profile "cilium-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-080519"
>>> host: /etc/resolv.conf:
* Profile "cilium-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-080519"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-080519
>>> host: crictl pods:
* Profile "cilium-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-080519"
>>> host: crictl containers:
* Profile "cilium-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-080519"
>>> k8s: describe netcat deployment:
error: context "cilium-080519" does not exist
>>> k8s: describe netcat pod(s):
error: context "cilium-080519" does not exist
>>> k8s: netcat logs:
error: context "cilium-080519" does not exist
>>> k8s: describe coredns deployment:
error: context "cilium-080519" does not exist
>>> k8s: describe coredns pods:
error: context "cilium-080519" does not exist
>>> k8s: coredns logs:
error: context "cilium-080519" does not exist
>>> k8s: describe api server pod(s):
error: context "cilium-080519" does not exist
>>> k8s: api server logs:
error: context "cilium-080519" does not exist
>>> host: /etc/cni:
* Profile "cilium-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-080519"
>>> host: ip a s:
* Profile "cilium-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-080519"
>>> host: ip r s:
* Profile "cilium-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-080519"
>>> host: iptables-save:
* Profile "cilium-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-080519"
>>> host: iptables table nat:
* Profile "cilium-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-080519"
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-080519
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-080519
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-080519" does not exist
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-080519" does not exist
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-080519
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-080519
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-080519" does not exist
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-080519" does not exist
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-080519" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-080519" does not exist
>>> k8s: kube-proxy logs:
error: context "cilium-080519" does not exist
>>> host: kubelet daemon status:
* Profile "cilium-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-080519"
>>> host: kubelet daemon config:
* Profile "cilium-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-080519"
>>> k8s: kubelet logs:
* Profile "cilium-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-080519"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-080519"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-080519"
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17965-330687/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 03:06:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.39.90:8443
  name: NoKubernetes-965722
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17965-330687/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 03:05:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.50.85:8443
  name: cert-expiration-985631
contexts:
- context:
    cluster: NoKubernetes-965722
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 03:06:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: NoKubernetes-965722
  name: NoKubernetes-965722
- context:
    cluster: cert-expiration-985631
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 03:05:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-985631
  name: cert-expiration-985631
current-context: ""
kind: Config
preferences: {}
users:
- name: NoKubernetes-965722
  user:
    client-certificate: /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/NoKubernetes-965722/client.crt
    client-key: /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/NoKubernetes-965722/client.key
- name: cert-expiration-985631
  user:
    client-certificate: /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/cert-expiration-985631/client.crt
    client-key: /home/jenkins/minikube-integration/17965-330687/.minikube/profiles/cert-expiration-985631/client.key
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-080519
>>> host: docker daemon status:
* Profile "cilium-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-080519"
>>> host: docker daemon config:
* Profile "cilium-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-080519"
>>> host: /etc/docker/daemon.json:
* Profile "cilium-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-080519"
>>> host: docker system info:
* Profile "cilium-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-080519"
>>> host: cri-docker daemon status:
* Profile "cilium-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-080519"
>>> host: cri-docker daemon config:
* Profile "cilium-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-080519"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-080519"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-080519"
>>> host: cri-dockerd version:
* Profile "cilium-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-080519"
>>> host: containerd daemon status:
* Profile "cilium-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-080519"
>>> host: containerd daemon config:
* Profile "cilium-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-080519"
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-080519"
>>> host: /etc/containerd/config.toml:
* Profile "cilium-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-080519"
>>> host: containerd config dump:
* Profile "cilium-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-080519"
>>> host: crio daemon status:
* Profile "cilium-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-080519"
>>> host: crio daemon config:
* Profile "cilium-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-080519"
>>> host: /etc/crio:
* Profile "cilium-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-080519"
>>> host: crio config:
* Profile "cilium-080519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-080519"
----------------------- debugLogs end: cilium-080519 [took: 4.108538824s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-080519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-080519
--- SKIP: TestNetworkPlugins/group/cilium (4.27s)
TestStartStop/group/disable-driver-mounts (0.17s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-679990" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-679990
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)