Test Report: KVM_Linux_containerd 17644

406b3a49e2f2efe39684a1d536accd2e485fd514:2023-11-27:32048

Failed tests (2/306)

Order   Failed test                           Duration (s)
30      TestAddons/parallel/MetricsServer     8.93
52      TestErrorSpam/setup                   64.04
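
The MetricsServer failure below comes from `out/minikube-linux-amd64 -p addons-824928 addons disable metrics-server` exiting with status 11 (MK_ADDON_DISABLE_PAUSED) while the pause check ran a runc listing inside the guest. As a rough local reproduction sketch, assuming a minikube checkout with the integration binary already built at out/minikube-linux-amd64 and reusing the profile name from this run (addons-824928), the two commands from the log can be replayed by hand:

    # The command the test executed; it returned exit status 11 in this report.
    out/minikube-linux-amd64 -p addons-824928 addons disable metrics-server --alsologtostderr -v=1

    # The pause check that actually failed: list runc containers inside the VM.
    out/minikube-linux-amd64 -p addons-824928 ssh -- sudo runc --root /run/containerd/runc/k8s.io list -f json

If the runc listing succeeds when replayed by hand, the original failure was most likely a race in which a container exited between the crictl listing and the runc listing, which would match the "no such file or directory" stat error in the stderr output below.
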
TestAddons/parallel/MetricsServer (8.93s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 3.463871ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-jlsnc" [f06c0650-0aae-493f-9f6e-e36db5a1ef5b] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.017414905s
addons_test.go:414: (dbg) Run:  kubectl --context addons-824928 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-824928 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:431: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-824928 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (374.654046ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1127 11:07:32.261446  342582 out.go:296] Setting OutFile to fd 1 ...
	I1127 11:07:32.261632  342582 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:07:32.261642  342582 out.go:309] Setting ErrFile to fd 2...
	I1127 11:07:32.261650  342582 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:07:32.261874  342582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-333834/.minikube/bin
	I1127 11:07:32.262180  342582 mustload.go:65] Loading cluster: addons-824928
	I1127 11:07:32.262562  342582 config.go:182] Loaded profile config "addons-824928": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1127 11:07:32.262610  342582 addons.go:594] checking whether the cluster is paused
	I1127 11:07:32.262723  342582 config.go:182] Loaded profile config "addons-824928": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1127 11:07:32.262740  342582 host.go:66] Checking if "addons-824928" exists ...
	I1127 11:07:32.263153  342582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:07:32.263219  342582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:07:32.277655  342582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38291
	I1127 11:07:32.278122  342582 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:07:32.278728  342582 main.go:141] libmachine: Using API Version  1
	I1127 11:07:32.278757  342582 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:07:32.279114  342582 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:07:32.279313  342582 main.go:141] libmachine: (addons-824928) Calling .GetState
	I1127 11:07:32.280800  342582 main.go:141] libmachine: (addons-824928) Calling .DriverName
	I1127 11:07:32.281013  342582 ssh_runner.go:195] Run: systemctl --version
	I1127 11:07:32.281043  342582 main.go:141] libmachine: (addons-824928) Calling .GetSSHHostname
	I1127 11:07:32.283372  342582 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:07:32.283761  342582 main.go:141] libmachine: (addons-824928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:9d:93", ip: ""} in network mk-addons-824928: {Iface:virbr1 ExpiryTime:2023-11-27 12:04:52 +0000 UTC Type:0 Mac:52:54:00:e0:9d:93 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-824928 Clientid:01:52:54:00:e0:9d:93}
	I1127 11:07:32.283802  342582 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined IP address 192.168.39.110 and MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:07:32.283937  342582 main.go:141] libmachine: (addons-824928) Calling .GetSSHPort
	I1127 11:07:32.284120  342582 main.go:141] libmachine: (addons-824928) Calling .GetSSHKeyPath
	I1127 11:07:32.284271  342582 main.go:141] libmachine: (addons-824928) Calling .GetSSHUsername
	I1127 11:07:32.284407  342582 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-333834/.minikube/machines/addons-824928/id_rsa Username:docker}
	I1127 11:07:32.386288  342582 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1127 11:07:32.386367  342582 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1127 11:07:32.459391  342582 cri.go:89] found id: "35b9253df5c32a41a08bb76a3dc0060ab9baef7e977e9a23b1a1c517358f334f"
	I1127 11:07:32.459421  342582 cri.go:89] found id: "2714c9394e895c685c9ca455704dbcda21f539db6dd5ebd80ecd88af18b78dac"
	I1127 11:07:32.459428  342582 cri.go:89] found id: "db143249e8e54e1827b0cfbb900101198795c3cc8bddbf07f4fc7535a72f7d80"
	I1127 11:07:32.459434  342582 cri.go:89] found id: "5f08fe5f73cfee9006ec79833a56f19e357ae710009e7d6d68c6adb90e50f8e5"
	I1127 11:07:32.459439  342582 cri.go:89] found id: "70c981ff5faa667fe2d2e786c889f92a8ce4000f95c16559e795ee4219cfed5b"
	I1127 11:07:32.459449  342582 cri.go:89] found id: "87d930f74cfce0f03283b3fc14d5e44b3af6c8459a795f9cc24a80c1159c087c"
	I1127 11:07:32.459455  342582 cri.go:89] found id: "bb98e5e93f36261144c26fc95dd6ff57793f3cd3e5bd407a12e16c6882f96bbf"
	I1127 11:07:32.459460  342582 cri.go:89] found id: "269e194a702daa61c26896ba5eddfb265490fa3bdb78ea62681c4dd40b710e14"
	I1127 11:07:32.459464  342582 cri.go:89] found id: "14c19a3162fbf7e0d8f8beba19fde30811dfc20653d4af46aadf027e74967a5f"
	I1127 11:07:32.459478  342582 cri.go:89] found id: "a844b810da0a1938fbd121289c1b2856283b411a218507739ab65513cfbe2bdb"
	I1127 11:07:32.459484  342582 cri.go:89] found id: "14551e6fb54f8605b158b54ad85fe0d6ca36f6f56c921b07e5afb45fca78c830"
	I1127 11:07:32.459489  342582 cri.go:89] found id: "4977f1df62b0aca8899f73ba20bd9c65b47fba3a98dc5e502ebfd272bb5509af"
	I1127 11:07:32.459495  342582 cri.go:89] found id: "db2e6b43a3ecdc35896741455df085be7428d3a0cbff5afd3267b3a92a686bcd"
	I1127 11:07:32.459503  342582 cri.go:89] found id: "dab7e41866c4736450e155ea74df75a7e3160690cd70ed701019033e0eacdd20"
	I1127 11:07:32.459512  342582 cri.go:89] found id: "6cd1fe5e1a3947a01e59e80c75bd23f6e8aceac22fae18cf8a637a7d1f1acc7a"
	I1127 11:07:32.459517  342582 cri.go:89] found id: "b857e9fe5d74ac1583f97f400bfd2aef259887bccfef09092911fb9c39d5ca31"
	I1127 11:07:32.459523  342582 cri.go:89] found id: "db0fc1fb6d6ec49a4bb0b7134f596c2ab435ccde9f4107725eab928aaf2689cf"
	I1127 11:07:32.459531  342582 cri.go:89] found id: "66c492cd46877018dcf78e40a43534287c9b2dea0349391e335f5e2960f01b04"
	I1127 11:07:32.459543  342582 cri.go:89] found id: "2f552f2f47593445e48ed8bc9701d6ca8e86dfb3800273820d05d4818c5a0d35"
	I1127 11:07:32.459549  342582 cri.go:89] found id: "379f4d6edc49a5fa206123dbd4f0a820d4d98136b361ebda9e0c5c0591bfebcb"
	I1127 11:07:32.459553  342582 cri.go:89] found id: ""
	I1127 11:07:32.459616  342582 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1127 11:07:32.562737  342582 main.go:141] libmachine: Making call to close driver server
	I1127 11:07:32.562761  342582 main.go:141] libmachine: (addons-824928) Calling .Close
	I1127 11:07:32.563112  342582 main.go:141] libmachine: (addons-824928) DBG | Closing plugin on server side
	I1127 11:07:32.563126  342582 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:07:32.563148  342582 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:07:32.565771  342582 out.go:177] 
	W1127 11:07:32.567450  342582 out.go:239] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-11-27T11:07:32Z" level=error msg="stat /run/containerd/runc/k8s.io/d9ac13a7f865f50e2ac277a9f17fe44df394250b7dbd3f949d3b273c90ed6d67: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-11-27T11:07:32Z" level=error msg="stat /run/containerd/runc/k8s.io/d9ac13a7f865f50e2ac277a9f17fe44df394250b7dbd3f949d3b273c90ed6d67: no such file or directory"
	
	W1127 11:07:32.567479  342582 out.go:239] * 
	* 
	W1127 11:07:32.571267  342582 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1127 11:07:32.572922  342582 out.go:177] 

** /stderr **
addons_test.go:433: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-824928 addons disable metrics-server --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-824928 -n addons-824928
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-824928 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-824928 logs -n 25: (2.618643087s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-626021 | jenkins | v1.32.0 | 27 Nov 23 11:04 UTC |                     |
	|         | -p download-only-626021              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| start   | -o=json --download-only              | download-only-626021 | jenkins | v1.32.0 | 27 Nov 23 11:04 UTC |                     |
	|         | -p download-only-626021              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.32.0 | 27 Nov 23 11:04 UTC | 27 Nov 23 11:04 UTC |
	| delete  | -p download-only-626021              | download-only-626021 | jenkins | v1.32.0 | 27 Nov 23 11:04 UTC | 27 Nov 23 11:04 UTC |
	| delete  | -p download-only-626021              | download-only-626021 | jenkins | v1.32.0 | 27 Nov 23 11:04 UTC | 27 Nov 23 11:04 UTC |
	| start   | --download-only -p                   | binary-mirror-302643 | jenkins | v1.32.0 | 27 Nov 23 11:04 UTC |                     |
	|         | binary-mirror-302643                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:35471               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-302643              | binary-mirror-302643 | jenkins | v1.32.0 | 27 Nov 23 11:04 UTC | 27 Nov 23 11:04 UTC |
	| addons  | disable dashboard -p                 | addons-824928        | jenkins | v1.32.0 | 27 Nov 23 11:04 UTC |                     |
	|         | addons-824928                        |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-824928        | jenkins | v1.32.0 | 27 Nov 23 11:04 UTC |                     |
	|         | addons-824928                        |                      |         |         |                     |                     |
	| start   | -p addons-824928 --wait=true         | addons-824928        | jenkins | v1.32.0 | 27 Nov 23 11:04 UTC | 27 Nov 23 11:07 UTC |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-824928        | jenkins | v1.32.0 | 27 Nov 23 11:07 UTC | 27 Nov 23 11:07 UTC |
	|         | -p addons-824928                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-824928        | jenkins | v1.32.0 | 27 Nov 23 11:07 UTC | 27 Nov 23 11:07 UTC |
	|         | -p addons-824928                     |                      |         |         |                     |                     |
	| addons  | addons-824928 addons disable         | addons-824928        | jenkins | v1.32.0 | 27 Nov 23 11:07 UTC | 27 Nov 23 11:07 UTC |
	|         | helm-tiller --alsologtostderr        |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| ip      | addons-824928 ip                     | addons-824928        | jenkins | v1.32.0 | 27 Nov 23 11:07 UTC | 27 Nov 23 11:07 UTC |
	| addons  | addons-824928 addons disable         | addons-824928        | jenkins | v1.32.0 | 27 Nov 23 11:07 UTC | 27 Nov 23 11:07 UTC |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-824928 addons                 | addons-824928        | jenkins | v1.32.0 | 27 Nov 23 11:07 UTC |                     |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 11:04:36
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 11:04:36.044424  341436 out.go:296] Setting OutFile to fd 1 ...
	I1127 11:04:36.044595  341436 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:04:36.044605  341436 out.go:309] Setting ErrFile to fd 2...
	I1127 11:04:36.044610  341436 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:04:36.044807  341436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-333834/.minikube/bin
	I1127 11:04:36.045483  341436 out.go:303] Setting JSON to false
	I1127 11:04:36.047098  341436 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6427,"bootTime":1701076649,"procs":910,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 11:04:36.047203  341436 start.go:138] virtualization: kvm guest
	I1127 11:04:36.049543  341436 out.go:177] * [addons-824928] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 11:04:36.051029  341436 out.go:177]   - MINIKUBE_LOCATION=17644
	I1127 11:04:36.051092  341436 notify.go:220] Checking for updates...
	I1127 11:04:36.052360  341436 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 11:04:36.053749  341436 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17644-333834/kubeconfig
	I1127 11:04:36.054991  341436 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-333834/.minikube
	I1127 11:04:36.056285  341436 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1127 11:04:36.057596  341436 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 11:04:36.059018  341436 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 11:04:36.090349  341436 out.go:177] * Using the kvm2 driver based on user configuration
	I1127 11:04:36.091741  341436 start.go:298] selected driver: kvm2
	I1127 11:04:36.091760  341436 start.go:902] validating driver "kvm2" against <nil>
	I1127 11:04:36.091771  341436 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 11:04:36.092458  341436 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 11:04:36.092552  341436 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17644-333834/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1127 11:04:36.107170  341436 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1127 11:04:36.107225  341436 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1127 11:04:36.107431  341436 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1127 11:04:36.107503  341436 cni.go:84] Creating CNI manager for ""
	I1127 11:04:36.107518  341436 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1127 11:04:36.107539  341436 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1127 11:04:36.107548  341436 start_flags.go:323] config:
	{Name:addons-824928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-824928 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 11:04:36.107668  341436 iso.go:125] acquiring lock: {Name:mkc3926f78de4c185660124f00819d5068cd8c03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 11:04:36.110358  341436 out.go:177] * Starting control plane node addons-824928 in cluster addons-824928
	I1127 11:04:36.111773  341436 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I1127 11:04:36.111805  341436 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17644-333834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I1127 11:04:36.111815  341436 cache.go:56] Caching tarball of preloaded images
	I1127 11:04:36.111909  341436 preload.go:174] Found /home/jenkins/minikube-integration/17644-333834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1127 11:04:36.111919  341436 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I1127 11:04:36.112253  341436 profile.go:148] Saving config to /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/config.json ...
	I1127 11:04:36.112276  341436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/config.json: {Name:mk0bda98ff4b6f58a73ab8cc1aea494dc7272893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:04:36.112406  341436 start.go:365] acquiring machines lock for addons-824928: {Name:mk58a20e711e16db038a8eb7b7f8cc30c0987a39 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1127 11:04:36.112449  341436 start.go:369] acquired machines lock for "addons-824928" in 31.311µs
	I1127 11:04:36.112473  341436 start.go:93] Provisioning new machine with config: &{Name:addons-824928 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:addons-824928 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1127 11:04:36.112552  341436 start.go:125] createHost starting for "" (driver="kvm2")
	I1127 11:04:36.114317  341436 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1127 11:04:36.114449  341436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:04:36.114493  341436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:04:36.129291  341436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36099
	I1127 11:04:36.129717  341436 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:04:36.130306  341436 main.go:141] libmachine: Using API Version  1
	I1127 11:04:36.130330  341436 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:04:36.130706  341436 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:04:36.130904  341436 main.go:141] libmachine: (addons-824928) Calling .GetMachineName
	I1127 11:04:36.131054  341436 main.go:141] libmachine: (addons-824928) Calling .DriverName
	I1127 11:04:36.131205  341436 start.go:159] libmachine.API.Create for "addons-824928" (driver="kvm2")
	I1127 11:04:36.131244  341436 client.go:168] LocalClient.Create starting
	I1127 11:04:36.131284  341436 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17644-333834/.minikube/certs/ca.pem
	I1127 11:04:36.461344  341436 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17644-333834/.minikube/certs/cert.pem
	I1127 11:04:36.549146  341436 main.go:141] libmachine: Running pre-create checks...
	I1127 11:04:36.549175  341436 main.go:141] libmachine: (addons-824928) Calling .PreCreateCheck
	I1127 11:04:36.549768  341436 main.go:141] libmachine: (addons-824928) Calling .GetConfigRaw
	I1127 11:04:36.550421  341436 main.go:141] libmachine: Creating machine...
	I1127 11:04:36.550439  341436 main.go:141] libmachine: (addons-824928) Calling .Create
	I1127 11:04:36.550630  341436 main.go:141] libmachine: (addons-824928) Creating KVM machine...
	I1127 11:04:36.551960  341436 main.go:141] libmachine: (addons-824928) DBG | found existing default KVM network
	I1127 11:04:36.552768  341436 main.go:141] libmachine: (addons-824928) DBG | I1127 11:04:36.552590  341458 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000147910}
	I1127 11:04:36.558289  341436 main.go:141] libmachine: (addons-824928) DBG | trying to create private KVM network mk-addons-824928 192.168.39.0/24...
	I1127 11:04:36.630599  341436 main.go:141] libmachine: (addons-824928) DBG | private KVM network mk-addons-824928 192.168.39.0/24 created
	I1127 11:04:36.630646  341436 main.go:141] libmachine: (addons-824928) DBG | I1127 11:04:36.630504  341458 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17644-333834/.minikube
	I1127 11:04:36.630668  341436 main.go:141] libmachine: (addons-824928) Setting up store path in /home/jenkins/minikube-integration/17644-333834/.minikube/machines/addons-824928 ...
	I1127 11:04:36.630700  341436 main.go:141] libmachine: (addons-824928) Building disk image from file:///home/jenkins/minikube-integration/17644-333834/.minikube/cache/iso/amd64/minikube-v1.32.1-1700142131-17634-amd64.iso
	I1127 11:04:36.630725  341436 main.go:141] libmachine: (addons-824928) Downloading /home/jenkins/minikube-integration/17644-333834/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17644-333834/.minikube/cache/iso/amd64/minikube-v1.32.1-1700142131-17634-amd64.iso...
	I1127 11:04:36.858851  341436 main.go:141] libmachine: (addons-824928) DBG | I1127 11:04:36.858660  341458 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17644-333834/.minikube/machines/addons-824928/id_rsa...
	I1127 11:04:37.012728  341436 main.go:141] libmachine: (addons-824928) DBG | I1127 11:04:37.012530  341458 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17644-333834/.minikube/machines/addons-824928/addons-824928.rawdisk...
	I1127 11:04:37.012766  341436 main.go:141] libmachine: (addons-824928) DBG | Writing magic tar header
	I1127 11:04:37.012782  341436 main.go:141] libmachine: (addons-824928) DBG | Writing SSH key tar header
	I1127 11:04:37.012797  341436 main.go:141] libmachine: (addons-824928) DBG | I1127 11:04:37.012687  341458 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17644-333834/.minikube/machines/addons-824928 ...
	I1127 11:04:37.012819  341436 main.go:141] libmachine: (addons-824928) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17644-333834/.minikube/machines/addons-824928
	I1127 11:04:37.012838  341436 main.go:141] libmachine: (addons-824928) Setting executable bit set on /home/jenkins/minikube-integration/17644-333834/.minikube/machines/addons-824928 (perms=drwx------)
	I1127 11:04:37.012856  341436 main.go:141] libmachine: (addons-824928) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17644-333834/.minikube/machines
	I1127 11:04:37.012880  341436 main.go:141] libmachine: (addons-824928) Setting executable bit set on /home/jenkins/minikube-integration/17644-333834/.minikube/machines (perms=drwxr-xr-x)
	I1127 11:04:37.012893  341436 main.go:141] libmachine: (addons-824928) Setting executable bit set on /home/jenkins/minikube-integration/17644-333834/.minikube (perms=drwxr-xr-x)
	I1127 11:04:37.012907  341436 main.go:141] libmachine: (addons-824928) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17644-333834/.minikube
	I1127 11:04:37.012927  341436 main.go:141] libmachine: (addons-824928) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17644-333834
	I1127 11:04:37.012945  341436 main.go:141] libmachine: (addons-824928) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1127 11:04:37.012957  341436 main.go:141] libmachine: (addons-824928) Setting executable bit set on /home/jenkins/minikube-integration/17644-333834 (perms=drwxrwxr-x)
	I1127 11:04:37.012968  341436 main.go:141] libmachine: (addons-824928) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1127 11:04:37.012975  341436 main.go:141] libmachine: (addons-824928) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1127 11:04:37.012983  341436 main.go:141] libmachine: (addons-824928) Creating domain...
	I1127 11:04:37.012988  341436 main.go:141] libmachine: (addons-824928) DBG | Checking permissions on dir: /home/jenkins
	I1127 11:04:37.012996  341436 main.go:141] libmachine: (addons-824928) DBG | Checking permissions on dir: /home
	I1127 11:04:37.013001  341436 main.go:141] libmachine: (addons-824928) DBG | Skipping /home - not owner
	I1127 11:04:37.014113  341436 main.go:141] libmachine: (addons-824928) define libvirt domain using xml: 
	I1127 11:04:37.014139  341436 main.go:141] libmachine: (addons-824928) <domain type='kvm'>
	I1127 11:04:37.014149  341436 main.go:141] libmachine: (addons-824928)   <name>addons-824928</name>
	I1127 11:04:37.014160  341436 main.go:141] libmachine: (addons-824928)   <memory unit='MiB'>4000</memory>
	I1127 11:04:37.014170  341436 main.go:141] libmachine: (addons-824928)   <vcpu>2</vcpu>
	I1127 11:04:37.014185  341436 main.go:141] libmachine: (addons-824928)   <features>
	I1127 11:04:37.014197  341436 main.go:141] libmachine: (addons-824928)     <acpi/>
	I1127 11:04:37.014205  341436 main.go:141] libmachine: (addons-824928)     <apic/>
	I1127 11:04:37.014211  341436 main.go:141] libmachine: (addons-824928)     <pae/>
	I1127 11:04:37.014218  341436 main.go:141] libmachine: (addons-824928)     
	I1127 11:04:37.014223  341436 main.go:141] libmachine: (addons-824928)   </features>
	I1127 11:04:37.014231  341436 main.go:141] libmachine: (addons-824928)   <cpu mode='host-passthrough'>
	I1127 11:04:37.014236  341436 main.go:141] libmachine: (addons-824928)   
	I1127 11:04:37.014247  341436 main.go:141] libmachine: (addons-824928)   </cpu>
	I1127 11:04:37.014274  341436 main.go:141] libmachine: (addons-824928)   <os>
	I1127 11:04:37.014303  341436 main.go:141] libmachine: (addons-824928)     <type>hvm</type>
	I1127 11:04:37.014317  341436 main.go:141] libmachine: (addons-824928)     <boot dev='cdrom'/>
	I1127 11:04:37.014329  341436 main.go:141] libmachine: (addons-824928)     <boot dev='hd'/>
	I1127 11:04:37.014342  341436 main.go:141] libmachine: (addons-824928)     <bootmenu enable='no'/>
	I1127 11:04:37.014362  341436 main.go:141] libmachine: (addons-824928)   </os>
	I1127 11:04:37.014388  341436 main.go:141] libmachine: (addons-824928)   <devices>
	I1127 11:04:37.014409  341436 main.go:141] libmachine: (addons-824928)     <disk type='file' device='cdrom'>
	I1127 11:04:37.014420  341436 main.go:141] libmachine: (addons-824928)       <source file='/home/jenkins/minikube-integration/17644-333834/.minikube/machines/addons-824928/boot2docker.iso'/>
	I1127 11:04:37.014435  341436 main.go:141] libmachine: (addons-824928)       <target dev='hdc' bus='scsi'/>
	I1127 11:04:37.014446  341436 main.go:141] libmachine: (addons-824928)       <readonly/>
	I1127 11:04:37.014453  341436 main.go:141] libmachine: (addons-824928)     </disk>
	I1127 11:04:37.014462  341436 main.go:141] libmachine: (addons-824928)     <disk type='file' device='disk'>
	I1127 11:04:37.014471  341436 main.go:141] libmachine: (addons-824928)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1127 11:04:37.014481  341436 main.go:141] libmachine: (addons-824928)       <source file='/home/jenkins/minikube-integration/17644-333834/.minikube/machines/addons-824928/addons-824928.rawdisk'/>
	I1127 11:04:37.014489  341436 main.go:141] libmachine: (addons-824928)       <target dev='hda' bus='virtio'/>
	I1127 11:04:37.014498  341436 main.go:141] libmachine: (addons-824928)     </disk>
	I1127 11:04:37.014505  341436 main.go:141] libmachine: (addons-824928)     <interface type='network'>
	I1127 11:04:37.014616  341436 main.go:141] libmachine: (addons-824928)       <source network='mk-addons-824928'/>
	I1127 11:04:37.014652  341436 main.go:141] libmachine: (addons-824928)       <model type='virtio'/>
	I1127 11:04:37.014667  341436 main.go:141] libmachine: (addons-824928)     </interface>
	I1127 11:04:37.014680  341436 main.go:141] libmachine: (addons-824928)     <interface type='network'>
	I1127 11:04:37.014695  341436 main.go:141] libmachine: (addons-824928)       <source network='default'/>
	I1127 11:04:37.014706  341436 main.go:141] libmachine: (addons-824928)       <model type='virtio'/>
	I1127 11:04:37.014734  341436 main.go:141] libmachine: (addons-824928)     </interface>
	I1127 11:04:37.014754  341436 main.go:141] libmachine: (addons-824928)     <serial type='pty'>
	I1127 11:04:37.014764  341436 main.go:141] libmachine: (addons-824928)       <target port='0'/>
	I1127 11:04:37.014772  341436 main.go:141] libmachine: (addons-824928)     </serial>
	I1127 11:04:37.014789  341436 main.go:141] libmachine: (addons-824928)     <console type='pty'>
	I1127 11:04:37.014798  341436 main.go:141] libmachine: (addons-824928)       <target type='serial' port='0'/>
	I1127 11:04:37.014806  341436 main.go:141] libmachine: (addons-824928)     </console>
	I1127 11:04:37.014812  341436 main.go:141] libmachine: (addons-824928)     <rng model='virtio'>
	I1127 11:04:37.014836  341436 main.go:141] libmachine: (addons-824928)       <backend model='random'>/dev/random</backend>
	I1127 11:04:37.014854  341436 main.go:141] libmachine: (addons-824928)     </rng>
	I1127 11:04:37.014870  341436 main.go:141] libmachine: (addons-824928)     
	I1127 11:04:37.014887  341436 main.go:141] libmachine: (addons-824928)     
	I1127 11:04:37.014901  341436 main.go:141] libmachine: (addons-824928)   </devices>
	I1127 11:04:37.014912  341436 main.go:141] libmachine: (addons-824928) </domain>
	I1127 11:04:37.014927  341436 main.go:141] libmachine: (addons-824928) 
	I1127 11:04:37.020444  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:6c:6a:b8 in network default
	I1127 11:04:37.022293  341436 main.go:141] libmachine: (addons-824928) Ensuring networks are active...
	I1127 11:04:37.022319  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:04:37.023048  341436 main.go:141] libmachine: (addons-824928) Ensuring network default is active
	I1127 11:04:37.023296  341436 main.go:141] libmachine: (addons-824928) Ensuring network mk-addons-824928 is active
	I1127 11:04:37.023715  341436 main.go:141] libmachine: (addons-824928) Getting domain xml...
	I1127 11:04:37.024243  341436 main.go:141] libmachine: (addons-824928) Creating domain...
	I1127 11:04:38.464978  341436 main.go:141] libmachine: (addons-824928) Waiting to get IP...
	I1127 11:04:38.465695  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:04:38.466040  341436 main.go:141] libmachine: (addons-824928) DBG | unable to find current IP address of domain addons-824928 in network mk-addons-824928
	I1127 11:04:38.466074  341436 main.go:141] libmachine: (addons-824928) DBG | I1127 11:04:38.466015  341458 retry.go:31] will retry after 302.155716ms: waiting for machine to come up
	I1127 11:04:38.769560  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:04:38.769933  341436 main.go:141] libmachine: (addons-824928) DBG | unable to find current IP address of domain addons-824928 in network mk-addons-824928
	I1127 11:04:38.769956  341436 main.go:141] libmachine: (addons-824928) DBG | I1127 11:04:38.769886  341458 retry.go:31] will retry after 240.614406ms: waiting for machine to come up
	I1127 11:04:39.012504  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:04:39.012929  341436 main.go:141] libmachine: (addons-824928) DBG | unable to find current IP address of domain addons-824928 in network mk-addons-824928
	I1127 11:04:39.012954  341436 main.go:141] libmachine: (addons-824928) DBG | I1127 11:04:39.012913  341458 retry.go:31] will retry after 322.516387ms: waiting for machine to come up
	I1127 11:04:39.337652  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:04:39.338038  341436 main.go:141] libmachine: (addons-824928) DBG | unable to find current IP address of domain addons-824928 in network mk-addons-824928
	I1127 11:04:39.338062  341436 main.go:141] libmachine: (addons-824928) DBG | I1127 11:04:39.337980  341458 retry.go:31] will retry after 429.581507ms: waiting for machine to come up
	I1127 11:04:39.769691  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:04:39.770159  341436 main.go:141] libmachine: (addons-824928) DBG | unable to find current IP address of domain addons-824928 in network mk-addons-824928
	I1127 11:04:39.770194  341436 main.go:141] libmachine: (addons-824928) DBG | I1127 11:04:39.770105  341458 retry.go:31] will retry after 611.250178ms: waiting for machine to come up
	I1127 11:04:40.383574  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:04:40.383986  341436 main.go:141] libmachine: (addons-824928) DBG | unable to find current IP address of domain addons-824928 in network mk-addons-824928
	I1127 11:04:40.384025  341436 main.go:141] libmachine: (addons-824928) DBG | I1127 11:04:40.383945  341458 retry.go:31] will retry after 939.138906ms: waiting for machine to come up
	I1127 11:04:41.324760  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:04:41.325179  341436 main.go:141] libmachine: (addons-824928) DBG | unable to find current IP address of domain addons-824928 in network mk-addons-824928
	I1127 11:04:41.325212  341436 main.go:141] libmachine: (addons-824928) DBG | I1127 11:04:41.325117  341458 retry.go:31] will retry after 907.230341ms: waiting for machine to come up
	I1127 11:04:42.234429  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:04:42.234888  341436 main.go:141] libmachine: (addons-824928) DBG | unable to find current IP address of domain addons-824928 in network mk-addons-824928
	I1127 11:04:42.234915  341436 main.go:141] libmachine: (addons-824928) DBG | I1127 11:04:42.234810  341458 retry.go:31] will retry after 949.317196ms: waiting for machine to come up
	I1127 11:04:43.186353  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:04:43.186854  341436 main.go:141] libmachine: (addons-824928) DBG | unable to find current IP address of domain addons-824928 in network mk-addons-824928
	I1127 11:04:43.186884  341436 main.go:141] libmachine: (addons-824928) DBG | I1127 11:04:43.186795  341458 retry.go:31] will retry after 1.603364543s: waiting for machine to come up
	I1127 11:04:44.792865  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:04:44.793242  341436 main.go:141] libmachine: (addons-824928) DBG | unable to find current IP address of domain addons-824928 in network mk-addons-824928
	I1127 11:04:44.793268  341436 main.go:141] libmachine: (addons-824928) DBG | I1127 11:04:44.793211  341458 retry.go:31] will retry after 1.699808521s: waiting for machine to come up
	I1127 11:04:46.494154  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:04:46.494716  341436 main.go:141] libmachine: (addons-824928) DBG | unable to find current IP address of domain addons-824928 in network mk-addons-824928
	I1127 11:04:46.494751  341436 main.go:141] libmachine: (addons-824928) DBG | I1127 11:04:46.494634  341458 retry.go:31] will retry after 2.275926044s: waiting for machine to come up
	I1127 11:04:48.773556  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:04:48.774128  341436 main.go:141] libmachine: (addons-824928) DBG | unable to find current IP address of domain addons-824928 in network mk-addons-824928
	I1127 11:04:48.774159  341436 main.go:141] libmachine: (addons-824928) DBG | I1127 11:04:48.774090  341458 retry.go:31] will retry after 3.162584212s: waiting for machine to come up
	I1127 11:04:51.937896  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:04:51.938431  341436 main.go:141] libmachine: (addons-824928) DBG | unable to find current IP address of domain addons-824928 in network mk-addons-824928
	I1127 11:04:51.938461  341436 main.go:141] libmachine: (addons-824928) DBG | I1127 11:04:51.938374  341458 retry.go:31] will retry after 2.998419899s: waiting for machine to come up
	I1127 11:04:54.940838  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:04:54.941367  341436 main.go:141] libmachine: (addons-824928) DBG | unable to find current IP address of domain addons-824928 in network mk-addons-824928
	I1127 11:04:54.941395  341436 main.go:141] libmachine: (addons-824928) DBG | I1127 11:04:54.941295  341458 retry.go:31] will retry after 5.057748535s: waiting for machine to come up
	I1127 11:05:00.003223  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:00.003753  341436 main.go:141] libmachine: (addons-824928) Found IP for machine: 192.168.39.110
	I1127 11:05:00.003778  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has current primary IP address 192.168.39.110 and MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:00.003788  341436 main.go:141] libmachine: (addons-824928) Reserving static IP address...
	I1127 11:05:00.004121  341436 main.go:141] libmachine: (addons-824928) DBG | unable to find host DHCP lease matching {name: "addons-824928", mac: "52:54:00:e0:9d:93", ip: "192.168.39.110"} in network mk-addons-824928
	I1127 11:05:00.077528  341436 main.go:141] libmachine: (addons-824928) DBG | Getting to WaitForSSH function...
	I1127 11:05:00.077554  341436 main.go:141] libmachine: (addons-824928) Reserved static IP address: 192.168.39.110
	I1127 11:05:00.077611  341436 main.go:141] libmachine: (addons-824928) Waiting for SSH to be available...
	I1127 11:05:00.080275  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:00.080669  341436 main.go:141] libmachine: (addons-824928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:9d:93", ip: ""} in network mk-addons-824928: {Iface:virbr1 ExpiryTime:2023-11-27 12:04:52 +0000 UTC Type:0 Mac:52:54:00:e0:9d:93 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e0:9d:93}
	I1127 11:05:00.080698  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined IP address 192.168.39.110 and MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:00.080859  341436 main.go:141] libmachine: (addons-824928) DBG | Using SSH client type: external
	I1127 11:05:00.080927  341436 main.go:141] libmachine: (addons-824928) DBG | Using SSH private key: /home/jenkins/minikube-integration/17644-333834/.minikube/machines/addons-824928/id_rsa (-rw-------)
	I1127 11:05:00.081002  341436 main.go:141] libmachine: (addons-824928) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17644-333834/.minikube/machines/addons-824928/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1127 11:05:00.081032  341436 main.go:141] libmachine: (addons-824928) DBG | About to run SSH command:
	I1127 11:05:00.081049  341436 main.go:141] libmachine: (addons-824928) DBG | exit 0
	I1127 11:05:00.174487  341436 main.go:141] libmachine: (addons-824928) DBG | SSH cmd err, output: <nil>: 
	I1127 11:05:00.174745  341436 main.go:141] libmachine: (addons-824928) KVM machine creation complete!
	I1127 11:05:00.175078  341436 main.go:141] libmachine: (addons-824928) Calling .GetConfigRaw
	I1127 11:05:00.175640  341436 main.go:141] libmachine: (addons-824928) Calling .DriverName
	I1127 11:05:00.175844  341436 main.go:141] libmachine: (addons-824928) Calling .DriverName
	I1127 11:05:00.176010  341436 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1127 11:05:00.176028  341436 main.go:141] libmachine: (addons-824928) Calling .GetState
	I1127 11:05:00.177239  341436 main.go:141] libmachine: Detecting operating system of created instance...
	I1127 11:05:00.177258  341436 main.go:141] libmachine: Waiting for SSH to be available...
	I1127 11:05:00.177265  341436 main.go:141] libmachine: Getting to WaitForSSH function...
	I1127 11:05:00.177272  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHHostname
	I1127 11:05:00.179613  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:00.179957  341436 main.go:141] libmachine: (addons-824928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:9d:93", ip: ""} in network mk-addons-824928: {Iface:virbr1 ExpiryTime:2023-11-27 12:04:52 +0000 UTC Type:0 Mac:52:54:00:e0:9d:93 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-824928 Clientid:01:52:54:00:e0:9d:93}
	I1127 11:05:00.179991  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined IP address 192.168.39.110 and MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:00.180076  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHPort
	I1127 11:05:00.180280  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHKeyPath
	I1127 11:05:00.180563  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHKeyPath
	I1127 11:05:00.180714  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHUsername
	I1127 11:05:00.180876  341436 main.go:141] libmachine: Using SSH client type: native
	I1127 11:05:00.181255  341436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1127 11:05:00.181268  341436 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1127 11:05:00.289868  341436 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1127 11:05:00.289900  341436 main.go:141] libmachine: Detecting the provisioner...
	I1127 11:05:00.289913  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHHostname
	I1127 11:05:00.292614  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:00.292911  341436 main.go:141] libmachine: (addons-824928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:9d:93", ip: ""} in network mk-addons-824928: {Iface:virbr1 ExpiryTime:2023-11-27 12:04:52 +0000 UTC Type:0 Mac:52:54:00:e0:9d:93 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-824928 Clientid:01:52:54:00:e0:9d:93}
	I1127 11:05:00.292958  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined IP address 192.168.39.110 and MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:00.293082  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHPort
	I1127 11:05:00.293298  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHKeyPath
	I1127 11:05:00.293511  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHKeyPath
	I1127 11:05:00.293687  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHUsername
	I1127 11:05:00.293864  341436 main.go:141] libmachine: Using SSH client type: native
	I1127 11:05:00.294182  341436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1127 11:05:00.294194  341436 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1127 11:05:00.407631  341436 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g21ec34a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1127 11:05:00.407756  341436 main.go:141] libmachine: found compatible host: buildroot
	I1127 11:05:00.407773  341436 main.go:141] libmachine: Provisioning with buildroot...
	I1127 11:05:00.407787  341436 main.go:141] libmachine: (addons-824928) Calling .GetMachineName
	I1127 11:05:00.408053  341436 buildroot.go:166] provisioning hostname "addons-824928"
	I1127 11:05:00.408079  341436 main.go:141] libmachine: (addons-824928) Calling .GetMachineName
	I1127 11:05:00.408284  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHHostname
	I1127 11:05:00.410939  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:00.411413  341436 main.go:141] libmachine: (addons-824928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:9d:93", ip: ""} in network mk-addons-824928: {Iface:virbr1 ExpiryTime:2023-11-27 12:04:52 +0000 UTC Type:0 Mac:52:54:00:e0:9d:93 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-824928 Clientid:01:52:54:00:e0:9d:93}
	I1127 11:05:00.411442  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined IP address 192.168.39.110 and MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:00.411636  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHPort
	I1127 11:05:00.411848  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHKeyPath
	I1127 11:05:00.412046  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHKeyPath
	I1127 11:05:00.412195  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHUsername
	I1127 11:05:00.412358  341436 main.go:141] libmachine: Using SSH client type: native
	I1127 11:05:00.412669  341436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1127 11:05:00.412683  341436 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-824928 && echo "addons-824928" | sudo tee /etc/hostname
	I1127 11:05:00.539813  341436 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-824928
	
	I1127 11:05:00.539860  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHHostname
	I1127 11:05:00.542656  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:00.542973  341436 main.go:141] libmachine: (addons-824928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:9d:93", ip: ""} in network mk-addons-824928: {Iface:virbr1 ExpiryTime:2023-11-27 12:04:52 +0000 UTC Type:0 Mac:52:54:00:e0:9d:93 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-824928 Clientid:01:52:54:00:e0:9d:93}
	I1127 11:05:00.543071  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined IP address 192.168.39.110 and MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:00.543239  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHPort
	I1127 11:05:00.543444  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHKeyPath
	I1127 11:05:00.543603  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHKeyPath
	I1127 11:05:00.543797  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHUsername
	I1127 11:05:00.544004  341436 main.go:141] libmachine: Using SSH client type: native
	I1127 11:05:00.544365  341436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1127 11:05:00.544390  341436 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-824928' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-824928/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-824928' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1127 11:05:00.663150  341436 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1127 11:05:00.663186  341436 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17644-333834/.minikube CaCertPath:/home/jenkins/minikube-integration/17644-333834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17644-333834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17644-333834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17644-333834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17644-333834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17644-333834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17644-333834/.minikube}
	I1127 11:05:00.663211  341436 buildroot.go:174] setting up certificates
	I1127 11:05:00.663240  341436 provision.go:83] configureAuth start
	I1127 11:05:00.663260  341436 main.go:141] libmachine: (addons-824928) Calling .GetMachineName
	I1127 11:05:00.663602  341436 main.go:141] libmachine: (addons-824928) Calling .GetIP
	I1127 11:05:00.666119  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:00.666499  341436 main.go:141] libmachine: (addons-824928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:9d:93", ip: ""} in network mk-addons-824928: {Iface:virbr1 ExpiryTime:2023-11-27 12:04:52 +0000 UTC Type:0 Mac:52:54:00:e0:9d:93 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-824928 Clientid:01:52:54:00:e0:9d:93}
	I1127 11:05:00.666524  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined IP address 192.168.39.110 and MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:00.666664  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHHostname
	I1127 11:05:00.668848  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:00.669102  341436 main.go:141] libmachine: (addons-824928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:9d:93", ip: ""} in network mk-addons-824928: {Iface:virbr1 ExpiryTime:2023-11-27 12:04:52 +0000 UTC Type:0 Mac:52:54:00:e0:9d:93 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-824928 Clientid:01:52:54:00:e0:9d:93}
	I1127 11:05:00.669131  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined IP address 192.168.39.110 and MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:00.669245  341436 provision.go:138] copyHostCerts
	I1127 11:05:00.669333  341436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17644-333834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17644-333834/.minikube/ca.pem (1078 bytes)
	I1127 11:05:00.669502  341436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17644-333834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17644-333834/.minikube/cert.pem (1123 bytes)
	I1127 11:05:00.669604  341436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17644-333834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17644-333834/.minikube/key.pem (1675 bytes)
	I1127 11:05:00.669727  341436 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17644-333834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17644-333834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17644-333834/.minikube/certs/ca-key.pem org=jenkins.addons-824928 san=[192.168.39.110 192.168.39.110 localhost 127.0.0.1 minikube addons-824928]
	I1127 11:05:01.070498  341436 provision.go:172] copyRemoteCerts
	I1127 11:05:01.070582  341436 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1127 11:05:01.070632  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHHostname
	I1127 11:05:01.073201  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:01.073533  341436 main.go:141] libmachine: (addons-824928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:9d:93", ip: ""} in network mk-addons-824928: {Iface:virbr1 ExpiryTime:2023-11-27 12:04:52 +0000 UTC Type:0 Mac:52:54:00:e0:9d:93 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-824928 Clientid:01:52:54:00:e0:9d:93}
	I1127 11:05:01.073578  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined IP address 192.168.39.110 and MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:01.073695  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHPort
	I1127 11:05:01.073921  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHKeyPath
	I1127 11:05:01.074074  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHUsername
	I1127 11:05:01.074324  341436 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-333834/.minikube/machines/addons-824928/id_rsa Username:docker}
	I1127 11:05:01.160635  341436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-333834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1127 11:05:01.183481  341436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-333834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1127 11:05:01.205195  341436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-333834/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1127 11:05:01.228240  341436 provision.go:86] duration metric: configureAuth took 564.98152ms
	I1127 11:05:01.228279  341436 buildroot.go:189] setting minikube options for container-runtime
	I1127 11:05:01.228467  341436 config.go:182] Loaded profile config "addons-824928": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1127 11:05:01.228522  341436 main.go:141] libmachine: Checking connection to Docker...
	I1127 11:05:01.228541  341436 main.go:141] libmachine: (addons-824928) Calling .GetURL
	I1127 11:05:01.229751  341436 main.go:141] libmachine: (addons-824928) DBG | Using libvirt version 6000000
	I1127 11:05:01.231825  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:01.232238  341436 main.go:141] libmachine: (addons-824928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:9d:93", ip: ""} in network mk-addons-824928: {Iface:virbr1 ExpiryTime:2023-11-27 12:04:52 +0000 UTC Type:0 Mac:52:54:00:e0:9d:93 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-824928 Clientid:01:52:54:00:e0:9d:93}
	I1127 11:05:01.232271  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined IP address 192.168.39.110 and MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:01.232473  341436 main.go:141] libmachine: Docker is up and running!
	I1127 11:05:01.232494  341436 main.go:141] libmachine: Reticulating splines...
	I1127 11:05:01.232502  341436 client.go:171] LocalClient.Create took 25.101247595s
	I1127 11:05:01.232521  341436 start.go:167] duration metric: libmachine.API.Create for "addons-824928" took 25.101318845s
	I1127 11:05:01.232530  341436 start.go:300] post-start starting for "addons-824928" (driver="kvm2")
	I1127 11:05:01.232540  341436 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1127 11:05:01.232557  341436 main.go:141] libmachine: (addons-824928) Calling .DriverName
	I1127 11:05:01.232823  341436 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1127 11:05:01.232853  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHHostname
	I1127 11:05:01.235142  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:01.235441  341436 main.go:141] libmachine: (addons-824928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:9d:93", ip: ""} in network mk-addons-824928: {Iface:virbr1 ExpiryTime:2023-11-27 12:04:52 +0000 UTC Type:0 Mac:52:54:00:e0:9d:93 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-824928 Clientid:01:52:54:00:e0:9d:93}
	I1127 11:05:01.235473  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined IP address 192.168.39.110 and MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:01.235624  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHPort
	I1127 11:05:01.235795  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHKeyPath
	I1127 11:05:01.235934  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHUsername
	I1127 11:05:01.236079  341436 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-333834/.minikube/machines/addons-824928/id_rsa Username:docker}
	I1127 11:05:01.321062  341436 ssh_runner.go:195] Run: cat /etc/os-release
	I1127 11:05:01.325453  341436 info.go:137] Remote host: Buildroot 2021.02.12
	I1127 11:05:01.325484  341436 filesync.go:126] Scanning /home/jenkins/minikube-integration/17644-333834/.minikube/addons for local assets ...
	I1127 11:05:01.325570  341436 filesync.go:126] Scanning /home/jenkins/minikube-integration/17644-333834/.minikube/files for local assets ...
	I1127 11:05:01.325603  341436 start.go:303] post-start completed in 93.065795ms
	I1127 11:05:01.325681  341436 main.go:141] libmachine: (addons-824928) Calling .GetConfigRaw
	I1127 11:05:01.326372  341436 main.go:141] libmachine: (addons-824928) Calling .GetIP
	I1127 11:05:01.329081  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:01.329571  341436 main.go:141] libmachine: (addons-824928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:9d:93", ip: ""} in network mk-addons-824928: {Iface:virbr1 ExpiryTime:2023-11-27 12:04:52 +0000 UTC Type:0 Mac:52:54:00:e0:9d:93 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-824928 Clientid:01:52:54:00:e0:9d:93}
	I1127 11:05:01.329607  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined IP address 192.168.39.110 and MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:01.329895  341436 profile.go:148] Saving config to /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/config.json ...
	I1127 11:05:01.330145  341436 start.go:128] duration metric: createHost completed in 25.217578933s
	I1127 11:05:01.330212  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHHostname
	I1127 11:05:01.332557  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:01.332858  341436 main.go:141] libmachine: (addons-824928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:9d:93", ip: ""} in network mk-addons-824928: {Iface:virbr1 ExpiryTime:2023-11-27 12:04:52 +0000 UTC Type:0 Mac:52:54:00:e0:9d:93 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-824928 Clientid:01:52:54:00:e0:9d:93}
	I1127 11:05:01.332885  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined IP address 192.168.39.110 and MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:01.333012  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHPort
	I1127 11:05:01.333201  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHKeyPath
	I1127 11:05:01.333329  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHKeyPath
	I1127 11:05:01.333514  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHUsername
	I1127 11:05:01.333672  341436 main.go:141] libmachine: Using SSH client type: native
	I1127 11:05:01.334066  341436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1127 11:05:01.334081  341436 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1127 11:05:01.447453  341436 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701083101.429286191
	
	I1127 11:05:01.447484  341436 fix.go:206] guest clock: 1701083101.429286191
	I1127 11:05:01.447494  341436 fix.go:219] Guest: 2023-11-27 11:05:01.429286191 +0000 UTC Remote: 2023-11-27 11:05:01.330163554 +0000 UTC m=+25.335488006 (delta=99.122637ms)
	I1127 11:05:01.447523  341436 fix.go:190] guest clock delta is within tolerance: 99.122637ms
	I1127 11:05:01.447556  341436 start.go:83] releasing machines lock for "addons-824928", held for 25.335066275s
	I1127 11:05:01.447594  341436 main.go:141] libmachine: (addons-824928) Calling .DriverName
	I1127 11:05:01.447899  341436 main.go:141] libmachine: (addons-824928) Calling .GetIP
	I1127 11:05:01.450813  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:01.451192  341436 main.go:141] libmachine: (addons-824928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:9d:93", ip: ""} in network mk-addons-824928: {Iface:virbr1 ExpiryTime:2023-11-27 12:04:52 +0000 UTC Type:0 Mac:52:54:00:e0:9d:93 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-824928 Clientid:01:52:54:00:e0:9d:93}
	I1127 11:05:01.451222  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined IP address 192.168.39.110 and MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:01.451375  341436 main.go:141] libmachine: (addons-824928) Calling .DriverName
	I1127 11:05:01.451883  341436 main.go:141] libmachine: (addons-824928) Calling .DriverName
	I1127 11:05:01.452065  341436 main.go:141] libmachine: (addons-824928) Calling .DriverName
	I1127 11:05:01.452161  341436 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1127 11:05:01.452227  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHHostname
	I1127 11:05:01.452316  341436 ssh_runner.go:195] Run: cat /version.json
	I1127 11:05:01.452346  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHHostname
	I1127 11:05:01.454763  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:01.455020  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:01.455103  341436 main.go:141] libmachine: (addons-824928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:9d:93", ip: ""} in network mk-addons-824928: {Iface:virbr1 ExpiryTime:2023-11-27 12:04:52 +0000 UTC Type:0 Mac:52:54:00:e0:9d:93 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-824928 Clientid:01:52:54:00:e0:9d:93}
	I1127 11:05:01.455134  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined IP address 192.168.39.110 and MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:01.455273  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHPort
	I1127 11:05:01.455363  341436 main.go:141] libmachine: (addons-824928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:9d:93", ip: ""} in network mk-addons-824928: {Iface:virbr1 ExpiryTime:2023-11-27 12:04:52 +0000 UTC Type:0 Mac:52:54:00:e0:9d:93 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-824928 Clientid:01:52:54:00:e0:9d:93}
	I1127 11:05:01.455390  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined IP address 192.168.39.110 and MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:01.455485  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHKeyPath
	I1127 11:05:01.455589  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHPort
	I1127 11:05:01.455659  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHUsername
	I1127 11:05:01.455739  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHKeyPath
	I1127 11:05:01.455797  341436 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-333834/.minikube/machines/addons-824928/id_rsa Username:docker}
	I1127 11:05:01.455848  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHUsername
	I1127 11:05:01.455975  341436 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-333834/.minikube/machines/addons-824928/id_rsa Username:docker}
	I1127 11:05:01.536135  341436 ssh_runner.go:195] Run: systemctl --version
	I1127 11:05:01.566149  341436 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1127 11:05:01.571940  341436 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1127 11:05:01.572049  341436 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 11:05:01.588508  341436 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1127 11:05:01.588539  341436 start.go:472] detecting cgroup driver to use...
	I1127 11:05:01.588632  341436 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1127 11:05:01.625464  341436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1127 11:05:01.638900  341436 docker.go:203] disabling cri-docker service (if available) ...
	I1127 11:05:01.638979  341436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1127 11:05:01.651568  341436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1127 11:05:01.663726  341436 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1127 11:05:01.763054  341436 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1127 11:05:01.880985  341436 docker.go:219] disabling docker service ...
	I1127 11:05:01.881079  341436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1127 11:05:01.895298  341436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1127 11:05:01.907764  341436 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1127 11:05:02.018321  341436 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1127 11:05:02.116634  341436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1127 11:05:02.129823  341436 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1127 11:05:02.146975  341436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1127 11:05:02.157266  341436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1127 11:05:02.167721  341436 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1127 11:05:02.167797  341436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1127 11:05:02.177852  341436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1127 11:05:02.187856  341436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1127 11:05:02.197784  341436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1127 11:05:02.207779  341436 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1127 11:05:02.217964  341436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1127 11:05:02.228219  341436 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1127 11:05:02.237705  341436 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1127 11:05:02.237780  341436 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1127 11:05:02.251570  341436 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1127 11:05:02.261203  341436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1127 11:05:02.364810  341436 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1127 11:05:02.397039  341436 start.go:519] Will wait 60s for socket path /run/containerd/containerd.sock
	I1127 11:05:02.397133  341436 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1127 11:05:02.402271  341436 retry.go:31] will retry after 796.106166ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1127 11:05:03.199276  341436 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1127 11:05:03.204969  341436 start.go:540] Will wait 60s for crictl version
	I1127 11:05:03.205064  341436 ssh_runner.go:195] Run: which crictl
	I1127 11:05:03.209036  341436 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1127 11:05:03.249502  341436 start.go:556] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.9
	RuntimeApiVersion:  v1
	I1127 11:05:03.249594  341436 ssh_runner.go:195] Run: containerd --version
	I1127 11:05:03.277682  341436 ssh_runner.go:195] Run: containerd --version
	I1127 11:05:03.363562  341436 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.7.9 ...
	I1127 11:05:03.426823  341436 main.go:141] libmachine: (addons-824928) Calling .GetIP
	I1127 11:05:03.429596  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:03.429910  341436 main.go:141] libmachine: (addons-824928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:9d:93", ip: ""} in network mk-addons-824928: {Iface:virbr1 ExpiryTime:2023-11-27 12:04:52 +0000 UTC Type:0 Mac:52:54:00:e0:9d:93 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-824928 Clientid:01:52:54:00:e0:9d:93}
	I1127 11:05:03.429931  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined IP address 192.168.39.110 and MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:03.430195  341436 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1127 11:05:03.435058  341436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1127 11:05:03.447697  341436 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I1127 11:05:03.447757  341436 ssh_runner.go:195] Run: sudo crictl images --output json
	I1127 11:05:03.489274  341436 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1127 11:05:03.489357  341436 ssh_runner.go:195] Run: which lz4
	I1127 11:05:03.493468  341436 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1127 11:05:03.497900  341436 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1127 11:05:03.497958  341436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-333834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (457457495 bytes)
	I1127 11:05:05.288091  341436 containerd.go:547] Took 1.794665 seconds to copy over tarball
	I1127 11:05:05.288193  341436 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1127 11:05:08.396080  341436 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.107850673s)
	I1127 11:05:08.396112  341436 containerd.go:554] Took 3.107974 seconds to extract the tarball
	I1127 11:05:08.396122  341436 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1127 11:05:08.437704  341436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1127 11:05:08.541639  341436 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1127 11:05:08.565659  341436 ssh_runner.go:195] Run: sudo crictl images --output json
	I1127 11:05:08.614361  341436 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.4 registry.k8s.io/kube-controller-manager:v1.28.4 registry.k8s.io/kube-scheduler:v1.28.4 registry.k8s.io/kube-proxy:v1.28.4 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1127 11:05:08.614477  341436 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.4
	I1127 11:05:08.614559  341436 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1127 11:05:08.614603  341436 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.4
	I1127 11:05:08.614623  341436 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1127 11:05:08.614480  341436 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.4
	I1127 11:05:08.614549  341436 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1127 11:05:08.614490  341436 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 11:05:08.614555  341436 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.4
	I1127 11:05:08.616330  341436 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.4
	I1127 11:05:08.616407  341436 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 11:05:08.616472  341436 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.4
	I1127 11:05:08.616545  341436 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1127 11:05:08.616656  341436 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.4
	I1127 11:05:08.616674  341436 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1127 11:05:08.616336  341436 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.4
	I1127 11:05:08.616781  341436 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1127 11:05:08.789683  341436 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns/coredns:v1.10.1"
	I1127 11:05:08.797015  341436 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.28.4"
	I1127 11:05:08.800106  341436 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.5.9-0"
	I1127 11:05:08.804667  341436 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.28.4"
	I1127 11:05:08.805196  341436 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.9"
	I1127 11:05:08.815429  341436 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.28.4"
	I1127 11:05:08.818679  341436 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I1127 11:05:08.850752  341436 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.28.4"
	I1127 11:05:10.052172  341436 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns/coredns:v1.10.1": (1.262444572s)
	I1127 11:05:10.052221  341436 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1127 11:05:10.052261  341436 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1127 11:05:10.052314  341436 ssh_runner.go:195] Run: which crictl
	I1127 11:05:10.261271  341436 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.28.4": (1.464218535s)
	I1127 11:05:10.261319  341436 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.4" does not exist at hash "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591" in container runtime
	I1127 11:05:10.261349  341436 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.4
	I1127 11:05:10.261403  341436 ssh_runner.go:195] Run: which crictl
	I1127 11:05:10.329303  341436 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.5.9-0": (1.529153172s)
	I1127 11:05:10.329342  341436 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1127 11:05:10.329371  341436 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1127 11:05:10.329416  341436 ssh_runner.go:195] Run: which crictl
	I1127 11:05:10.329427  341436 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.28.4": (1.52473272s)
	I1127 11:05:10.329455  341436 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.4" does not exist at hash "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257" in container runtime
	I1127 11:05:10.329480  341436 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.4
	I1127 11:05:10.329514  341436 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.9": (1.52428563s)
	I1127 11:05:10.329549  341436 cache_images.go:116] "registry.k8s.io/pause:3.9" needs transfer: "registry.k8s.io/pause:3.9" does not exist at hash "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c" in container runtime
	I1127 11:05:10.329525  341436 ssh_runner.go:195] Run: which crictl
	I1127 11:05:10.329599  341436 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.28.4": (1.514142098s)
	I1127 11:05:10.329619  341436 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.4" does not exist at hash "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1" in container runtime
	I1127 11:05:10.329638  341436 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.4
	I1127 11:05:10.329579  341436 cri.go:218] Removing image: registry.k8s.io/pause:3.9
	I1127 11:05:10.329674  341436 ssh_runner.go:195] Run: which crictl
	I1127 11:05:10.329712  341436 ssh_runner.go:195] Run: which crictl
	I1127 11:05:10.329780  341436 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5": (1.511077672s)
	I1127 11:05:10.329810  341436 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1127 11:05:10.329819  341436 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.28.4": (1.479030364s)
	I1127 11:05:10.329832  341436 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 11:05:10.329840  341436 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.4" needs transfer: "registry.k8s.io/kube-proxy:v1.28.4" does not exist at hash "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e" in container runtime
	I1127 11:05:10.329861  341436 ssh_runner.go:195] Run: which crictl
	I1127 11:05:10.329911  341436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1127 11:05:10.329939  341436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.4
	I1127 11:05:10.329868  341436 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.4
	I1127 11:05:10.329976  341436 ssh_runner.go:195] Run: which crictl
	I1127 11:05:10.335118  341436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I1127 11:05:10.348027  341436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.4
	I1127 11:05:10.348061  341436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.4
	I1127 11:05:10.348072  341436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.9
	I1127 11:05:10.348130  341436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 11:05:10.724913  341436 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17644-333834/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.4
	I1127 11:05:10.901260  341436 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17644-333834/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1127 11:05:10.901322  341436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.4
	I1127 11:05:10.903851  341436 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17644-333834/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1127 11:05:10.905204  341436 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17644-333834/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1127 11:05:10.905254  341436 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17644-333834/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.4
	I1127 11:05:10.905259  341436 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17644-333834/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9
	I1127 11:05:10.905259  341436 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17644-333834/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.4
	I1127 11:05:10.999350  341436 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17644-333834/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.4
	I1127 11:05:10.999427  341436 cache_images.go:92] LoadImages completed in 2.385037834s
	W1127 11:05:10.999522  341436 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17644-333834/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.4: no such file or directory
	I1127 11:05:10.999610  341436 ssh_runner.go:195] Run: sudo crictl info
	I1127 11:05:11.034245  341436 cni.go:84] Creating CNI manager for ""
	I1127 11:05:11.034273  341436 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1127 11:05:11.034295  341436 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1127 11:05:11.034333  341436 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-824928 NodeName:addons-824928 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1127 11:05:11.034478  341436 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-824928"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.110
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1127 11:05:11.034546  341436 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-824928 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-824928 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1127 11:05:11.034710  341436 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1127 11:05:11.043248  341436 binaries.go:44] Found k8s binaries, skipping transfer
	I1127 11:05:11.043346  341436 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1127 11:05:11.052871  341436 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I1127 11:05:11.068701  341436 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1127 11:05:11.084163  341436 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2108 bytes)
	I1127 11:05:11.099801  341436 ssh_runner.go:195] Run: grep 192.168.39.110	control-plane.minikube.internal$ /etc/hosts
	I1127 11:05:11.103809  341436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.110	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1127 11:05:11.116094  341436 certs.go:56] Setting up /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928 for IP: 192.168.39.110
	I1127 11:05:11.116130  341436 certs.go:190] acquiring lock for shared ca certs: {Name:mkc9acb9c1afcf1fdc0c0efadf3ba8980794e061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:05:11.116266  341436 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17644-333834/.minikube/ca.key
	I1127 11:05:11.224295  341436 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17644-333834/.minikube/ca.crt ...
	I1127 11:05:11.224332  341436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-333834/.minikube/ca.crt: {Name:mk797eed8a81cca5907cef7e013ee28d01a20226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:05:11.224553  341436 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17644-333834/.minikube/ca.key ...
	I1127 11:05:11.224569  341436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-333834/.minikube/ca.key: {Name:mk06890e74a2fc4426fc0b87609e2c831cf0ff60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:05:11.224672  341436 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17644-333834/.minikube/proxy-client-ca.key
	I1127 11:05:11.519418  341436 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17644-333834/.minikube/proxy-client-ca.crt ...
	I1127 11:05:11.519449  341436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-333834/.minikube/proxy-client-ca.crt: {Name:mk955a00e6ca48b1df335249ff39fbba6fa27f40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:05:11.519641  341436 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17644-333834/.minikube/proxy-client-ca.key ...
	I1127 11:05:11.519678  341436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-333834/.minikube/proxy-client-ca.key: {Name:mk636bf32ec7c1dd4dbbea622f76c944ddadee7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:05:11.519814  341436 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.key
	I1127 11:05:11.519830  341436 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.crt with IP's: []
	I1127 11:05:11.727856  341436 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.crt ...
	I1127 11:05:11.727896  341436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.crt: {Name:mk01784c395ce86529a322a70e79a2fc5b5f69c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:05:11.728125  341436 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.key ...
	I1127 11:05:11.728141  341436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.key: {Name:mk17d2f0de24ea1935fd8c638345819232ea0a1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:05:11.728236  341436 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/apiserver.key.ea44ecf9
	I1127 11:05:11.728258  341436 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/apiserver.crt.ea44ecf9 with IP's: [192.168.39.110 10.96.0.1 127.0.0.1 10.0.0.1]
	I1127 11:05:11.861289  341436 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/apiserver.crt.ea44ecf9 ...
	I1127 11:05:11.861322  341436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/apiserver.crt.ea44ecf9: {Name:mkd75a2d94e45a14e6eddcae46935e531269350a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:05:11.861533  341436 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/apiserver.key.ea44ecf9 ...
	I1127 11:05:11.861555  341436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/apiserver.key.ea44ecf9: {Name:mk5ed1c2d62ad800ab23708f2eb194298b5ea24c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:05:11.861649  341436 certs.go:337] copying /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/apiserver.crt.ea44ecf9 -> /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/apiserver.crt
	I1127 11:05:11.861726  341436 certs.go:341] copying /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/apiserver.key.ea44ecf9 -> /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/apiserver.key
	I1127 11:05:11.861782  341436 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/proxy-client.key
	I1127 11:05:11.861799  341436 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/proxy-client.crt with IP's: []
	I1127 11:05:11.980164  341436 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/proxy-client.crt ...
	I1127 11:05:11.980197  341436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/proxy-client.crt: {Name:mk339728d9b9469b6c0f8b1b347e12707b250f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:05:11.980383  341436 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/proxy-client.key ...
	I1127 11:05:11.980407  341436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/proxy-client.key: {Name:mk52a6830af5f5fc7e6110da04eadc194b958042 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:05:11.980617  341436 certs.go:437] found cert: /home/jenkins/minikube-integration/17644-333834/.minikube/certs/home/jenkins/minikube-integration/17644-333834/.minikube/certs/ca-key.pem (1675 bytes)
	I1127 11:05:11.980656  341436 certs.go:437] found cert: /home/jenkins/minikube-integration/17644-333834/.minikube/certs/home/jenkins/minikube-integration/17644-333834/.minikube/certs/ca.pem (1078 bytes)
	I1127 11:05:11.980681  341436 certs.go:437] found cert: /home/jenkins/minikube-integration/17644-333834/.minikube/certs/home/jenkins/minikube-integration/17644-333834/.minikube/certs/cert.pem (1123 bytes)
	I1127 11:05:11.980714  341436 certs.go:437] found cert: /home/jenkins/minikube-integration/17644-333834/.minikube/certs/home/jenkins/minikube-integration/17644-333834/.minikube/certs/key.pem (1675 bytes)
	I1127 11:05:11.981374  341436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1127 11:05:12.005777  341436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1127 11:05:12.029377  341436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1127 11:05:12.052408  341436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1127 11:05:12.076061  341436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-333834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1127 11:05:12.098863  341436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-333834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1127 11:05:12.121913  341436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-333834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1127 11:05:12.144706  341436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-333834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1127 11:05:12.167740  341436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-333834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1127 11:05:12.190229  341436 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
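The certificate steps logged above (crypto.go generating the proxy-client pair, then ssh_runner.go copying the PEM files under /var/lib/minikube/certs) boil down to issuing a CA-signed client certificate and shipping it to the node. As a rough illustration of the signing half only, here is a minimal, self-contained Go sketch using the standard library; the common names, key sizes, lifetimes, and output paths are illustrative assumptions, not minikube's actual parameters.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"os"
	"time"
)

// writePEM dumps a DER blob to disk as a single PEM block.
func writePEM(path, blockType string, der []byte) {
	f, err := os.Create(path)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if err := pem.Encode(f, &pem.Block{Type: blockType, Bytes: der}); err != nil {
		log.Fatal(err)
	}
}

func main() {
	// Stand-in for the proxy-client CA (proxy-client-ca.crt/.key in the log).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "proxyClientCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Client certificate signed by that CA (analogous to proxy-client.crt/.key).
	clientKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	clientTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "front-proxy-client"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	clientDER, err := x509.CreateCertificate(rand.Reader, clientTmpl, caCert, &clientKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}

	writePEM("proxy-client.crt", "CERTIFICATE", clientDER)
	writePEM("proxy-client.key", "RSA PRIVATE KEY", x509.MarshalPKCS1PrivateKey(clientKey))
}
```

Running the sketch leaves proxy-client.crt and proxy-client.key in the working directory, roughly analogous to the profile files the log then copies to the node.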
	I1127 11:05:12.206066  341436 ssh_runner.go:195] Run: openssl version
	I1127 11:05:12.211734  341436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1127 11:05:12.221052  341436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1127 11:05:12.225446  341436 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 11:05 /usr/share/ca-certificates/minikubeCA.pem
	I1127 11:05:12.225503  341436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1127 11:05:12.230971  341436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1127 11:05:12.240313  341436 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1127 11:05:12.244406  341436 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1127 11:05:12.244464  341436 kubeadm.go:404] StartCluster: {Name:addons-824928 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-824928 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 11:05:12.244554  341436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1127 11:05:12.244608  341436 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1127 11:05:12.281284  341436 cri.go:89] found id: ""
	I1127 11:05:12.281359  341436 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1127 11:05:12.291132  341436 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1127 11:05:12.300140  341436 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1127 11:05:12.308571  341436 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1127 11:05:12.308621  341436 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1127 11:05:12.359501  341436 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1127 11:05:12.359616  341436 kubeadm.go:322] [preflight] Running pre-flight checks
	I1127 11:05:12.485293  341436 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1127 11:05:12.485460  341436 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1127 11:05:12.485601  341436 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1127 11:05:26.809913  341436 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1127 11:05:26.811557  341436 out.go:204]   - Generating certificates and keys ...
	I1127 11:05:26.811649  341436 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1127 11:05:26.811732  341436 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1127 11:05:27.058442  341436 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1127 11:05:27.265264  341436 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1127 11:05:27.616272  341436 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1127 11:05:27.858582  341436 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1127 11:05:27.972060  341436 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1127 11:05:27.972286  341436 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-824928 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I1127 11:05:28.319088  341436 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1127 11:05:28.319234  341436 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-824928 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I1127 11:05:28.393350  341436 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1127 11:05:28.490085  341436 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1127 11:05:28.666989  341436 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1127 11:05:28.667087  341436 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1127 11:05:28.967336  341436 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1127 11:05:29.037570  341436 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1127 11:05:29.360345  341436 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1127 11:05:29.499390  341436 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1127 11:05:29.500176  341436 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1127 11:05:29.504235  341436 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1127 11:05:29.506321  341436 out.go:204]   - Booting up control plane ...
	I1127 11:05:29.506454  341436 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1127 11:05:29.506627  341436 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1127 11:05:29.506733  341436 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1127 11:05:29.523624  341436 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1127 11:05:29.524685  341436 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1127 11:05:29.524785  341436 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1127 11:05:29.641313  341436 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1127 11:05:36.643314  341436 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002525 seconds
	I1127 11:05:36.643567  341436 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1127 11:05:36.662068  341436 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1127 11:05:37.198988  341436 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1127 11:05:37.199233  341436 kubeadm.go:322] [mark-control-plane] Marking the node addons-824928 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1127 11:05:37.714330  341436 kubeadm.go:322] [bootstrap-token] Using token: 1cje1l.nuskn0z4amno77fe
	I1127 11:05:37.715797  341436 out.go:204]   - Configuring RBAC rules ...
	I1127 11:05:37.715960  341436 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1127 11:05:37.733427  341436 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1127 11:05:37.751301  341436 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1127 11:05:37.757509  341436 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1127 11:05:37.765575  341436 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1127 11:05:37.778112  341436 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1127 11:05:37.795761  341436 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1127 11:05:38.101848  341436 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1127 11:05:38.141509  341436 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1127 11:05:38.144453  341436 kubeadm.go:322] 
	I1127 11:05:38.144556  341436 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1127 11:05:38.144581  341436 kubeadm.go:322] 
	I1127 11:05:38.144708  341436 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1127 11:05:38.144720  341436 kubeadm.go:322] 
	I1127 11:05:38.144741  341436 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1127 11:05:38.144803  341436 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1127 11:05:38.144891  341436 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1127 11:05:38.144908  341436 kubeadm.go:322] 
	I1127 11:05:38.144973  341436 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1127 11:05:38.144987  341436 kubeadm.go:322] 
	I1127 11:05:38.145087  341436 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1127 11:05:38.145097  341436 kubeadm.go:322] 
	I1127 11:05:38.145172  341436 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1127 11:05:38.145275  341436 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1127 11:05:38.145446  341436 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1127 11:05:38.145467  341436 kubeadm.go:322] 
	I1127 11:05:38.145581  341436 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1127 11:05:38.145685  341436 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1127 11:05:38.145695  341436 kubeadm.go:322] 
	I1127 11:05:38.145806  341436 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 1cje1l.nuskn0z4amno77fe \
	I1127 11:05:38.145939  341436 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fa32f0024f906811c05f71c3451d36f95543370a69bd2e7473b442defb7ea35 \
	I1127 11:05:38.145964  341436 kubeadm.go:322] 	--control-plane 
	I1127 11:05:38.145973  341436 kubeadm.go:322] 
	I1127 11:05:38.146089  341436 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1127 11:05:38.146101  341436 kubeadm.go:322] 
	I1127 11:05:38.146210  341436 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 1cje1l.nuskn0z4amno77fe \
	I1127 11:05:38.146358  341436 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fa32f0024f906811c05f71c3451d36f95543370a69bd2e7473b442defb7ea35 
	I1127 11:05:38.147055  341436 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
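The join commands printed by kubeadm above carry a --discovery-token-ca-cert-hash, which kubeadm computes as the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. The Go sketch below recomputes that hash from a CA PEM; the ca.crt path is an assumption here (on a control-plane node it normally lives at /etc/kubernetes/pki/ca.crt).

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Illustrative path; point this at the cluster CA certificate.
	data, err := os.ReadFile("ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// The discovery hash is the SHA-256 of the DER-encoded Subject Public Key Info.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
}
```

Run against the cluster's ca.crt, the output should match the sha256:... value embedded in the kubeadm join lines above.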
	I1127 11:05:38.147109  341436 cni.go:84] Creating CNI manager for ""
	I1127 11:05:38.147122  341436 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1127 11:05:38.148846  341436 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1127 11:05:38.150463  341436 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1127 11:05:38.164000  341436 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
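At this point minikube writes a 457-byte /etc/cni/net.d/1-k8s.conflist for the bridge CNI it just recommended. The exact contents are not shown in the log; the sketch below writes an illustrative bridge + portmap conflist of the same general shape, where the subnet, file name, and plugin options are assumptions rather than minikube's template.

```go
package main

import (
	"log"
	"os"
)

// Illustrative bridge CNI configuration; not the literal file minikube generates.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Written to the working directory here; on the node the file would sit
	// under /etc/cni/net.d/ so the runtime picks it up.
	if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
```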
	I1127 11:05:38.192352  341436 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1127 11:05:38.192491  341436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:05:38.192537  341436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=81390b5609e7feb2151fde4633273d04eb05a21f minikube.k8s.io/name=addons-824928 minikube.k8s.io/updated_at=2023_11_27T11_05_38_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:05:38.217047  341436 ops.go:34] apiserver oom_adj: -16
	I1127 11:05:38.544200  341436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:05:38.661328  341436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:05:39.262709  341436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:05:39.762736  341436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:05:40.262340  341436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:05:40.762183  341436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:05:41.262093  341436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:05:41.762169  341436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:05:42.262779  341436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:05:42.762654  341436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:05:43.262315  341436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:05:43.762933  341436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:05:44.262239  341436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:05:44.762361  341436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:05:45.262910  341436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:05:45.762663  341436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:05:46.262473  341436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:05:46.762914  341436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:05:47.262248  341436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:05:47.762863  341436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:05:48.262634  341436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:05:48.762922  341436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:05:49.262721  341436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:05:49.762924  341436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:05:49.893281  341436 kubeadm.go:1081] duration metric: took 11.700875881s to wait for elevateKubeSystemPrivileges.
	I1127 11:05:49.893333  341436 kubeadm.go:406] StartCluster complete in 37.64887264s
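The burst of "kubectl get sa default" calls between 11:05:38 and 11:05:49 is a retry loop: after the RBAC binding is created, minikube keeps polling until the default service account exists, which is the ~11.7s "wait for elevateKubeSystemPrivileges" reported above. A minimal stand-alone Go sketch of that kind of wait, shelling out to kubectl (the timeout, interval, and namespace flag are assumptions):

```go
package main

import (
	"log"
	"os/exec"
	"time"
)

// Poll until the "default" ServiceAccount exists, mirroring the repeated
// "kubectl get sa default" calls in the log above.
func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "-n", "default")
		if err := cmd.Run(); err == nil {
			log.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for the default service account")
}
```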
	I1127 11:05:49.893359  341436 settings.go:142] acquiring lock: {Name:mke8ec4e1e4529ccfbb1a20855ff925735688605 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:05:49.893533  341436 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17644-333834/kubeconfig
	I1127 11:05:49.894930  341436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-333834/kubeconfig: {Name:mk26cb3ea255c7ac37cbbb635182a6dedeae9873 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:05:49.895266  341436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1127 11:05:49.895432  341436 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1127 11:05:49.895594  341436 addons.go:69] Setting volumesnapshots=true in profile "addons-824928"
	I1127 11:05:49.895629  341436 addons.go:231] Setting addon volumesnapshots=true in "addons-824928"
	I1127 11:05:49.895659  341436 config.go:182] Loaded profile config "addons-824928": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1127 11:05:49.895721  341436 host.go:66] Checking if "addons-824928" exists ...
	I1127 11:05:49.895725  341436 addons.go:69] Setting ingress-dns=true in profile "addons-824928"
	I1127 11:05:49.895740  341436 addons.go:231] Setting addon ingress-dns=true in "addons-824928"
	I1127 11:05:49.895796  341436 host.go:66] Checking if "addons-824928" exists ...
	I1127 11:05:49.896112  341436 addons.go:69] Setting default-storageclass=true in profile "addons-824928"
	I1127 11:05:49.896125  341436 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-824928"
	I1127 11:05:49.896135  341436 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-824928"
	I1127 11:05:49.896151  341436 addons.go:69] Setting helm-tiller=true in profile "addons-824928"
	I1127 11:05:49.896173  341436 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-824928"
	I1127 11:05:49.896183  341436 addons.go:231] Setting addon helm-tiller=true in "addons-824928"
	I1127 11:05:49.896221  341436 host.go:66] Checking if "addons-824928" exists ...
	I1127 11:05:49.896239  341436 host.go:66] Checking if "addons-824928" exists ...
	I1127 11:05:49.896251  341436 addons.go:69] Setting registry=true in profile "addons-824928"
	I1127 11:05:49.896266  341436 addons.go:231] Setting addon registry=true in "addons-824928"
	I1127 11:05:49.896301  341436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:05:49.896112  341436 addons.go:69] Setting cloud-spanner=true in profile "addons-824928"
	I1127 11:05:49.896317  341436 addons.go:69] Setting storage-provisioner=true in profile "addons-824928"
	I1127 11:05:49.896325  341436 addons.go:69] Setting ingress=true in profile "addons-824928"
	I1127 11:05:49.896330  341436 addons.go:231] Setting addon storage-provisioner=true in "addons-824928"
	I1127 11:05:49.896308  341436 host.go:66] Checking if "addons-824928" exists ...
	I1127 11:05:49.896340  341436 addons.go:231] Setting addon ingress=true in "addons-824928"
	I1127 11:05:49.896346  341436 addons.go:231] Setting addon cloud-spanner=true in "addons-824928"
	I1127 11:05:49.896365  341436 host.go:66] Checking if "addons-824928" exists ...
	I1127 11:05:49.896378  341436 host.go:66] Checking if "addons-824928" exists ...
	I1127 11:05:49.896395  341436 host.go:66] Checking if "addons-824928" exists ...
	I1127 11:05:49.896695  341436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:05:49.896703  341436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:05:49.896720  341436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:05:49.896737  341436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:05:49.896739  341436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:05:49.896759  341436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:05:49.896769  341436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:05:49.896782  341436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:05:49.896329  341436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:05:49.896810  341436 addons.go:69] Setting inspektor-gadget=true in profile "addons-824928"
	I1127 11:05:49.896826  341436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:05:49.896829  341436 addons.go:231] Setting addon inspektor-gadget=true in "addons-824928"
	I1127 11:05:49.896835  341436 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-824928"
	I1127 11:05:49.896798  341436 addons.go:69] Setting gcp-auth=true in profile "addons-824928"
	I1127 11:05:49.896155  341436 addons.go:69] Setting metrics-server=true in profile "addons-824928"
	I1127 11:05:49.896853  341436 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-824928"
	I1127 11:05:49.896859  341436 mustload.go:65] Loading cluster: addons-824928
	I1127 11:05:49.896861  341436 addons.go:231] Setting addon metrics-server=true in "addons-824928"
	I1127 11:05:49.896866  341436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:05:49.896880  341436 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-824928"
	I1127 11:05:49.896892  341436 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-824928"
	I1127 11:05:49.897036  341436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:05:49.897036  341436 host.go:66] Checking if "addons-824928" exists ...
	I1127 11:05:49.897091  341436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:05:49.897158  341436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:05:49.897200  341436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:05:49.897541  341436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:05:49.897566  341436 host.go:66] Checking if "addons-824928" exists ...
	I1127 11:05:49.897586  341436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:05:49.897874  341436 host.go:66] Checking if "addons-824928" exists ...
	I1127 11:05:49.897883  341436 config.go:182] Loaded profile config "addons-824928": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1127 11:05:49.898147  341436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:05:49.898608  341436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:05:49.899072  341436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:05:49.899090  341436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:05:49.899124  341436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:05:49.899135  341436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:05:49.923292  341436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37915
	I1127 11:05:49.923562  341436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34263
	I1127 11:05:49.923675  341436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42947
	I1127 11:05:49.923704  341436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:05:49.923710  341436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35359
	I1127 11:05:49.923679  341436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:05:49.923750  341436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:05:49.923754  341436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37493
	I1127 11:05:49.923831  341436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:05:49.923801  341436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44885
	I1127 11:05:49.924464  341436 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:05:49.924577  341436 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:05:49.924640  341436 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:05:49.925773  341436 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:05:49.925868  341436 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:05:49.926038  341436 main.go:141] libmachine: Using API Version  1
	I1127 11:05:49.926052  341436 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:05:49.926209  341436 main.go:141] libmachine: Using API Version  1
	I1127 11:05:49.926223  341436 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:05:49.926358  341436 main.go:141] libmachine: Using API Version  1
	I1127 11:05:49.926403  341436 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:05:49.926484  341436 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:05:49.926898  341436 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:05:49.926956  341436 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:05:49.927073  341436 main.go:141] libmachine: Using API Version  1
	I1127 11:05:49.927086  341436 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:05:49.927133  341436 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:05:49.927254  341436 main.go:141] libmachine: Using API Version  1
	I1127 11:05:49.927266  341436 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:05:49.927709  341436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:05:49.927731  341436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:05:49.928145  341436 main.go:141] libmachine: Using API Version  1
	I1127 11:05:49.928185  341436 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:05:49.928640  341436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:05:49.928697  341436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:05:49.929144  341436 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:05:49.929203  341436 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:05:49.929243  341436 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:05:49.929637  341436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:05:49.929658  341436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:05:49.931253  341436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:05:49.931306  341436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:05:49.931591  341436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:05:49.931626  341436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:05:49.931648  341436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:05:49.931677  341436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:05:49.950478  341436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46813
	I1127 11:05:49.951161  341436 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:05:49.951822  341436 main.go:141] libmachine: Using API Version  1
	I1127 11:05:49.951851  341436 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:05:49.952270  341436 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:05:49.952477  341436 main.go:141] libmachine: (addons-824928) Calling .GetState
	I1127 11:05:49.955713  341436 addons.go:231] Setting addon default-storageclass=true in "addons-824928"
	I1127 11:05:49.955767  341436 host.go:66] Checking if "addons-824928" exists ...
	I1127 11:05:49.956233  341436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:05:49.956263  341436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:05:49.956520  341436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45355
	I1127 11:05:49.957026  341436 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:05:49.957504  341436 main.go:141] libmachine: Using API Version  1
	I1127 11:05:49.957524  341436 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:05:49.957945  341436 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:05:49.958141  341436 main.go:141] libmachine: (addons-824928) Calling .GetState
	I1127 11:05:49.959828  341436 main.go:141] libmachine: (addons-824928) Calling .DriverName
	I1127 11:05:49.962063  341436 out.go:177]   - Using image docker.io/registry:2.8.3
	I1127 11:05:49.960415  341436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42719
	I1127 11:05:49.961518  341436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41379
	I1127 11:05:49.965504  341436 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1127 11:05:49.964036  341436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40507
	I1127 11:05:49.964451  341436 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:05:49.964683  341436 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:05:49.967135  341436 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1127 11:05:49.967164  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1127 11:05:49.967185  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHHostname
	I1127 11:05:49.967964  341436 main.go:141] libmachine: Using API Version  1
	I1127 11:05:49.967987  341436 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:05:49.967970  341436 main.go:141] libmachine: Using API Version  1
	I1127 11:05:49.968046  341436 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:05:49.968560  341436 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:05:49.968563  341436 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:05:49.968658  341436 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:05:49.969317  341436 main.go:141] libmachine: (addons-824928) Calling .GetState
	I1127 11:05:49.969367  341436 main.go:141] libmachine: (addons-824928) Calling .GetState
	I1127 11:05:49.969404  341436 main.go:141] libmachine: Using API Version  1
	I1127 11:05:49.969426  341436 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:05:49.970126  341436 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:05:49.970699  341436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:05:49.970746  341436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:05:49.971453  341436 main.go:141] libmachine: (addons-824928) Calling .DriverName
	I1127 11:05:49.973574  341436 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1127 11:05:49.972413  341436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33193
	I1127 11:05:49.972902  341436 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-824928"
	I1127 11:05:49.973258  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:49.973961  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHPort
	I1127 11:05:49.976267  341436 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1127 11:05:49.975054  341436 host.go:66] Checking if "addons-824928" exists ...
	I1127 11:05:49.975097  341436 main.go:141] libmachine: (addons-824928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:9d:93", ip: ""} in network mk-addons-824928: {Iface:virbr1 ExpiryTime:2023-11-27 12:04:52 +0000 UTC Type:0 Mac:52:54:00:e0:9d:93 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-824928 Clientid:01:52:54:00:e0:9d:93}
	I1127 11:05:49.975474  341436 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:05:49.976185  341436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44947
	I1127 11:05:49.976365  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHKeyPath
	I1127 11:05:49.976623  341436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43915
	I1127 11:05:49.977359  341436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37197
	I1127 11:05:49.979539  341436 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1127 11:05:49.977945  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined IP address 192.168.39.110 and MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:49.978253  341436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:05:49.978472  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHUsername
	I1127 11:05:49.978484  341436 main.go:141] libmachine: Using API Version  1
	I1127 11:05:49.978824  341436 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:05:49.978872  341436 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:05:49.979075  341436 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:05:49.983191  341436 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1127 11:05:49.981611  341436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:05:49.981651  341436 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:05:49.982700  341436 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-333834/.minikube/machines/addons-824928/id_rsa Username:docker}
	I1127 11:05:49.982635  341436 main.go:141] libmachine: Using API Version  1
	I1127 11:05:49.983068  341436 main.go:141] libmachine: Using API Version  1
	I1127 11:05:49.983603  341436 main.go:141] libmachine: Using API Version  1
	I1127 11:05:49.984528  341436 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:05:49.986056  341436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44873
	I1127 11:05:49.986065  341436 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1127 11:05:49.987451  341436 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1127 11:05:49.986091  341436 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:05:49.986052  341436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46649
	I1127 11:05:49.986168  341436 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:05:49.986182  341436 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:05:49.986022  341436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34367
	I1127 11:05:49.986474  341436 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:05:49.986599  341436 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:05:49.988593  341436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46751
	I1127 11:05:49.990343  341436 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1127 11:05:49.989630  341436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:05:49.989737  341436 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:05:49.989759  341436 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:05:49.989788  341436 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:05:49.989826  341436 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:05:49.990015  341436 main.go:141] libmachine: (addons-824928) Calling .GetState
	I1127 11:05:49.990525  341436 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:05:49.990679  341436 main.go:141] libmachine: Using API Version  1
	I1127 11:05:49.991692  341436 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1127 11:05:49.993379  341436 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1127 11:05:49.993398  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1127 11:05:49.991718  341436 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:05:49.993418  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHHostname
	I1127 11:05:49.991756  341436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:05:49.991825  341436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43739
	I1127 11:05:49.993789  341436 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:05:49.991946  341436 main.go:141] libmachine: (addons-824928) Calling .GetState
	I1127 11:05:49.992410  341436 main.go:141] libmachine: Using API Version  1
	I1127 11:05:49.993873  341436 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:05:49.992436  341436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:05:49.993917  341436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:05:49.992504  341436 main.go:141] libmachine: Using API Version  1
	I1127 11:05:49.994080  341436 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:05:49.992677  341436 main.go:141] libmachine: Using API Version  1
	I1127 11:05:49.994147  341436 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:05:49.994310  341436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33581
	I1127 11:05:49.994489  341436 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:05:49.994501  341436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:05:49.994533  341436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:05:49.994709  341436 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-824928" context rescaled to 1 replicas
	I1127 11:05:49.994743  341436 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1127 11:05:49.996248  341436 out.go:177] * Verifying Kubernetes components...
	I1127 11:05:49.994904  341436 main.go:141] libmachine: (addons-824928) Calling .GetState
	I1127 11:05:49.994927  341436 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:05:49.995315  341436 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:05:49.995498  341436 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:05:49.996033  341436 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:05:49.996174  341436 host.go:66] Checking if "addons-824928" exists ...
	I1127 11:05:49.997255  341436 main.go:141] libmachine: (addons-824928) Calling .DriverName
	I1127 11:05:49.997642  341436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 11:05:49.998031  341436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:05:49.998056  341436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:05:49.998316  341436 main.go:141] libmachine: Using API Version  1
	I1127 11:05:49.998336  341436 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:05:49.999724  341436 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1127 11:05:49.998514  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:49.998562  341436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:05:49.998993  341436 main.go:141] libmachine: (addons-824928) Calling .GetState
	I1127 11:05:49.999035  341436 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:05:49.999162  341436 main.go:141] libmachine: Using API Version  1
	I1127 11:05:49.999680  341436 main.go:141] libmachine: (addons-824928) Calling .DriverName
	I1127 11:05:49.999709  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHPort
	I1127 11:05:50.002623  341436 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1127 11:05:50.001326  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHKeyPath
	I1127 11:05:50.001344  341436 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:05:50.001354  341436 main.go:141] libmachine: (addons-824928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:9d:93", ip: ""} in network mk-addons-824928: {Iface:virbr1 ExpiryTime:2023-11-27 12:04:52 +0000 UTC Type:0 Mac:52:54:00:e0:9d:93 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-824928 Clientid:01:52:54:00:e0:9d:93}
	I1127 11:05:50.002769  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined IP address 192.168.39.110 and MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:50.001399  341436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:05:50.002870  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHUsername
	I1127 11:05:50.001567  341436 main.go:141] libmachine: (addons-824928) Calling .GetState
	I1127 11:05:50.004617  341436 main.go:141] libmachine: (addons-824928) Calling .DriverName
	I1127 11:05:50.004627  341436 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 11:05:50.003065  341436 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-333834/.minikube/machines/addons-824928/id_rsa Username:docker}
	I1127 11:05:50.003115  341436 main.go:141] libmachine: (addons-824928) Calling .DriverName
	I1127 11:05:50.004081  341436 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:05:50.007168  341436 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.22.0
	I1127 11:05:50.006059  341436 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 11:05:50.006544  341436 main.go:141] libmachine: (addons-824928) Calling .GetState
	I1127 11:05:50.008566  341436 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1127 11:05:50.008600  341436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44847
	I1127 11:05:50.008635  341436 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1127 11:05:50.008652  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1127 11:05:50.008690  341436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46835
	I1127 11:05:50.009816  341436 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1127 11:05:50.011392  341436 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1127 11:05:50.011411  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1127 11:05:50.011429  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHHostname
	I1127 11:05:50.013036  341436 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1127 11:05:50.013057  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1127 11:05:50.013075  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHHostname
	I1127 11:05:50.009946  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHHostname
	I1127 11:05:50.009981  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1127 11:05:50.013155  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHHostname
	I1127 11:05:50.010543  341436 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:05:50.010547  341436 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:05:50.010782  341436 main.go:141] libmachine: (addons-824928) Calling .DriverName
	I1127 11:05:50.015359  341436 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1127 11:05:50.016703  341436 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1127 11:05:50.016730  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1127 11:05:50.016755  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHHostname
	I1127 11:05:50.014628  341436 main.go:141] libmachine: Using API Version  1
	I1127 11:05:50.016814  341436 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:05:50.015077  341436 main.go:141] libmachine: Using API Version  1
	I1127 11:05:50.016861  341436 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:05:50.017408  341436 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:05:50.017844  341436 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:05:50.018242  341436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:05:50.018281  341436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:05:50.018417  341436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:05:50.018458  341436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:05:50.018510  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:50.018535  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:50.018831  341436 main.go:141] libmachine: (addons-824928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:9d:93", ip: ""} in network mk-addons-824928: {Iface:virbr1 ExpiryTime:2023-11-27 12:04:52 +0000 UTC Type:0 Mac:52:54:00:e0:9d:93 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-824928 Clientid:01:52:54:00:e0:9d:93}
	I1127 11:05:50.018855  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined IP address 192.168.39.110 and MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:50.019067  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHPort
	I1127 11:05:50.019255  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHKeyPath
	I1127 11:05:50.019481  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHUsername
	I1127 11:05:50.019635  341436 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-333834/.minikube/machines/addons-824928/id_rsa Username:docker}
	I1127 11:05:50.020246  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:50.022223  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHPort
	I1127 11:05:50.022497  341436 main.go:141] libmachine: (addons-824928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:9d:93", ip: ""} in network mk-addons-824928: {Iface:virbr1 ExpiryTime:2023-11-27 12:04:52 +0000 UTC Type:0 Mac:52:54:00:e0:9d:93 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-824928 Clientid:01:52:54:00:e0:9d:93}
	I1127 11:05:50.022527  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined IP address 192.168.39.110 and MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:50.022558  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:50.022576  341436 main.go:141] libmachine: (addons-824928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:9d:93", ip: ""} in network mk-addons-824928: {Iface:virbr1 ExpiryTime:2023-11-27 12:04:52 +0000 UTC Type:0 Mac:52:54:00:e0:9d:93 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-824928 Clientid:01:52:54:00:e0:9d:93}
	I1127 11:05:50.022627  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined IP address 192.168.39.110 and MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:50.022668  341436 main.go:141] libmachine: (addons-824928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:9d:93", ip: ""} in network mk-addons-824928: {Iface:virbr1 ExpiryTime:2023-11-27 12:04:52 +0000 UTC Type:0 Mac:52:54:00:e0:9d:93 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-824928 Clientid:01:52:54:00:e0:9d:93}
	I1127 11:05:50.022685  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined IP address 192.168.39.110 and MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:50.022719  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHPort
	I1127 11:05:50.023125  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:50.023168  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHKeyPath
	I1127 11:05:50.023211  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHPort
	I1127 11:05:50.023222  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHKeyPath
	I1127 11:05:50.023554  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHUsername
	I1127 11:05:50.023602  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHKeyPath
	I1127 11:05:50.023640  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHUsername
	I1127 11:05:50.024155  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHPort
	I1127 11:05:50.024217  341436 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-333834/.minikube/machines/addons-824928/id_rsa Username:docker}
	I1127 11:05:50.024786  341436 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-333834/.minikube/machines/addons-824928/id_rsa Username:docker}
	I1127 11:05:50.025307  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHUsername
	I1127 11:05:50.025376  341436 main.go:141] libmachine: (addons-824928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:9d:93", ip: ""} in network mk-addons-824928: {Iface:virbr1 ExpiryTime:2023-11-27 12:04:52 +0000 UTC Type:0 Mac:52:54:00:e0:9d:93 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-824928 Clientid:01:52:54:00:e0:9d:93}
	I1127 11:05:50.025394  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined IP address 192.168.39.110 and MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:50.025763  341436 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-333834/.minikube/machines/addons-824928/id_rsa Username:docker}
	I1127 11:05:50.026265  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHKeyPath
	I1127 11:05:50.026525  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHUsername
	I1127 11:05:50.026753  341436 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-333834/.minikube/machines/addons-824928/id_rsa Username:docker}
	I1127 11:05:50.027700  341436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35731
	I1127 11:05:50.029770  341436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42471
	I1127 11:05:50.030189  341436 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:05:50.030750  341436 main.go:141] libmachine: Using API Version  1
	I1127 11:05:50.030768  341436 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:05:50.031345  341436 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:05:50.031540  341436 main.go:141] libmachine: (addons-824928) Calling .GetState
	I1127 11:05:50.031904  341436 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:05:50.032733  341436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38601
	I1127 11:05:50.033008  341436 main.go:141] libmachine: Using API Version  1
	I1127 11:05:50.033113  341436 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:05:50.033243  341436 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:05:50.033320  341436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35075
	I1127 11:05:50.033711  341436 main.go:141] libmachine: (addons-824928) Calling .DriverName
	I1127 11:05:50.033773  341436 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:05:50.033817  341436 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:05:50.033882  341436 main.go:141] libmachine: Using API Version  1
	I1127 11:05:50.033890  341436 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:05:50.036010  341436 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1127 11:05:50.034262  341436 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:05:50.034288  341436 main.go:141] libmachine: (addons-824928) Calling .GetState
	I1127 11:05:50.034294  341436 main.go:141] libmachine: Using API Version  1
	I1127 11:05:50.036233  341436 main.go:141] libmachine: (addons-824928) Calling .GetState
	I1127 11:05:50.037761  341436 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:05:50.037892  341436 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1127 11:05:50.037908  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1127 11:05:50.037927  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHHostname
	I1127 11:05:50.038963  341436 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:05:50.039247  341436 main.go:141] libmachine: (addons-824928) Calling .GetState
	I1127 11:05:50.039839  341436 main.go:141] libmachine: (addons-824928) Calling .DriverName
	I1127 11:05:50.041692  341436 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1127 11:05:50.043185  341436 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1127 11:05:50.043198  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1127 11:05:50.043212  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHHostname
	I1127 11:05:50.040795  341436 main.go:141] libmachine: (addons-824928) Calling .DriverName
	I1127 11:05:50.041204  341436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40513
	I1127 11:05:50.043363  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHPort
	I1127 11:05:50.042424  341436 main.go:141] libmachine: (addons-824928) Calling .DriverName
	I1127 11:05:50.042665  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:50.043458  341436 main.go:141] libmachine: (addons-824928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:9d:93", ip: ""} in network mk-addons-824928: {Iface:virbr1 ExpiryTime:2023-11-27 12:04:52 +0000 UTC Type:0 Mac:52:54:00:e0:9d:93 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-824928 Clientid:01:52:54:00:e0:9d:93}
	I1127 11:05:50.043480  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined IP address 192.168.39.110 and MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:50.043513  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHKeyPath
	I1127 11:05:50.043727  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHUsername
	I1127 11:05:50.045274  341436 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1127 11:05:50.043918  341436 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-333834/.minikube/machines/addons-824928/id_rsa Username:docker}
	I1127 11:05:50.044416  341436 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:05:50.046534  341436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45615
	I1127 11:05:50.046558  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:50.046787  341436 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1127 11:05:50.047224  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHPort
	I1127 11:05:50.047972  341436 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1127 11:05:50.049314  341436 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1127 11:05:50.049329  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1127 11:05:50.049344  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHHostname
	I1127 11:05:50.048048  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1127 11:05:50.049387  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHHostname
	I1127 11:05:50.048055  341436 main.go:141] libmachine: (addons-824928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:9d:93", ip: ""} in network mk-addons-824928: {Iface:virbr1 ExpiryTime:2023-11-27 12:04:52 +0000 UTC Type:0 Mac:52:54:00:e0:9d:93 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-824928 Clientid:01:52:54:00:e0:9d:93}
	I1127 11:05:50.049440  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined IP address 192.168.39.110 and MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:50.048283  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHKeyPath
	I1127 11:05:50.048524  341436 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:05:50.048708  341436 main.go:141] libmachine: Using API Version  1
	I1127 11:05:50.049521  341436 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:05:50.050470  341436 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:05:50.050523  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHUsername
	I1127 11:05:50.050606  341436 main.go:141] libmachine: Using API Version  1
	I1127 11:05:50.050628  341436 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:05:50.050701  341436 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-333834/.minikube/machines/addons-824928/id_rsa Username:docker}
	I1127 11:05:50.051084  341436 main.go:141] libmachine: (addons-824928) Calling .DriverName
	I1127 11:05:50.051296  341436 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:05:50.051696  341436 main.go:141] libmachine: (addons-824928) Calling .GetState
	I1127 11:05:50.053560  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:50.053623  341436 main.go:141] libmachine: (addons-824928) Calling .DriverName
	I1127 11:05:50.054420  341436 main.go:141] libmachine: (addons-824928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:9d:93", ip: ""} in network mk-addons-824928: {Iface:virbr1 ExpiryTime:2023-11-27 12:04:52 +0000 UTC Type:0 Mac:52:54:00:e0:9d:93 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-824928 Clientid:01:52:54:00:e0:9d:93}
	I1127 11:05:50.054347  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:50.054444  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined IP address 192.168.39.110 and MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:50.054458  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHPort
	I1127 11:05:50.054635  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHKeyPath
	I1127 11:05:50.054754  341436 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1127 11:05:50.054771  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1127 11:05:50.054788  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHHostname
	I1127 11:05:50.054840  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHUsername
	I1127 11:05:50.054954  341436 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-333834/.minikube/machines/addons-824928/id_rsa Username:docker}
	I1127 11:05:50.054997  341436 main.go:141] libmachine: (addons-824928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:9d:93", ip: ""} in network mk-addons-824928: {Iface:virbr1 ExpiryTime:2023-11-27 12:04:52 +0000 UTC Type:0 Mac:52:54:00:e0:9d:93 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-824928 Clientid:01:52:54:00:e0:9d:93}
	I1127 11:05:50.055045  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined IP address 192.168.39.110 and MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:50.055534  341436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42509
	I1127 11:05:50.055566  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHPort
	I1127 11:05:50.055762  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHKeyPath
	I1127 11:05:50.056008  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHUsername
	I1127 11:05:50.056185  341436 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-333834/.minikube/machines/addons-824928/id_rsa Username:docker}
	I1127 11:05:50.057871  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:50.058244  341436 main.go:141] libmachine: (addons-824928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:9d:93", ip: ""} in network mk-addons-824928: {Iface:virbr1 ExpiryTime:2023-11-27 12:04:52 +0000 UTC Type:0 Mac:52:54:00:e0:9d:93 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-824928 Clientid:01:52:54:00:e0:9d:93}
	I1127 11:05:50.058270  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined IP address 192.168.39.110 and MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:50.058404  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHPort
	I1127 11:05:50.058563  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHKeyPath
	I1127 11:05:50.058767  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHUsername
	I1127 11:05:50.058913  341436 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-333834/.minikube/machines/addons-824928/id_rsa Username:docker}
	I1127 11:05:50.075556  341436 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:05:50.075988  341436 main.go:141] libmachine: Using API Version  1
	I1127 11:05:50.076015  341436 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:05:50.076427  341436 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:05:50.076613  341436 main.go:141] libmachine: (addons-824928) Calling .GetState
	I1127 11:05:50.078361  341436 main.go:141] libmachine: (addons-824928) Calling .DriverName
	I1127 11:05:50.080744  341436 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1127 11:05:50.082314  341436 out.go:177]   - Using image docker.io/busybox:stable
	I1127 11:05:50.083889  341436 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1127 11:05:50.083908  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1127 11:05:50.083929  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHHostname
	I1127 11:05:50.087533  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:50.087987  341436 main.go:141] libmachine: (addons-824928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:9d:93", ip: ""} in network mk-addons-824928: {Iface:virbr1 ExpiryTime:2023-11-27 12:04:52 +0000 UTC Type:0 Mac:52:54:00:e0:9d:93 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-824928 Clientid:01:52:54:00:e0:9d:93}
	I1127 11:05:50.088017  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined IP address 192.168.39.110 and MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:50.088164  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHPort
	I1127 11:05:50.088332  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHKeyPath
	I1127 11:05:50.088520  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHUsername
	I1127 11:05:50.088665  341436 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-333834/.minikube/machines/addons-824928/id_rsa Username:docker}
	W1127 11:05:50.090241  341436 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:48764->192.168.39.110:22: read: connection reset by peer
	I1127 11:05:50.090261  341436 retry.go:31] will retry after 224.169421ms: ssh: handshake failed: read tcp 192.168.39.1:48764->192.168.39.110:22: read: connection reset by peer
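The handshake failure logged just above is treated as transient: sshutil reports the reset connection and retry.go schedules another attempt roughly 224ms later, after which the addon manifest copies resume at 11:05:50.278. As a rough illustration of that retry-after-delay pattern (a minimal, self-contained Go sketch under assumed parameters, not minikube's actual retry.go implementation):

	package main
	
	import (
		"errors"
		"fmt"
		"time"
	)
	
	// retry runs op up to attempts times, sleeping delay between failed attempts.
	// Illustrative only; the real retry policy and delays differ.
	func retry(attempts int, delay time.Duration, op func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
		}
		return fmt.Errorf("after %d attempts: %w", attempts, err)
	}
	
	func main() {
		calls := 0
		err := retry(3, 224*time.Millisecond, func() error {
			calls++
			if calls < 2 {
				return errors.New("ssh: handshake failed") // simulated transient error
			}
			return nil
		})
		fmt.Println("result:", err)
	}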
	I1127 11:05:50.278145  341436 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1127 11:05:50.278176  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1127 11:05:50.284793  341436 node_ready.go:35] waiting up to 6m0s for node "addons-824928" to be "Ready" ...
	I1127 11:05:50.285253  341436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1127 11:05:50.289451  341436 node_ready.go:49] node "addons-824928" has status "Ready":"True"
	I1127 11:05:50.289477  341436 node_ready.go:38] duration metric: took 4.654175ms waiting for node "addons-824928" to be "Ready" ...
	I1127 11:05:50.289488  341436 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1127 11:05:50.297991  341436 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-824928" in "kube-system" namespace to be "Ready" ...
	I1127 11:05:50.304356  341436 pod_ready.go:92] pod "etcd-addons-824928" in "kube-system" namespace has status "Ready":"True"
	I1127 11:05:50.304382  341436 pod_ready.go:81] duration metric: took 6.353333ms waiting for pod "etcd-addons-824928" in "kube-system" namespace to be "Ready" ...
	I1127 11:05:50.304391  341436 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-824928" in "kube-system" namespace to be "Ready" ...
	I1127 11:05:50.313537  341436 pod_ready.go:92] pod "kube-apiserver-addons-824928" in "kube-system" namespace has status "Ready":"True"
	I1127 11:05:50.313572  341436 pod_ready.go:81] duration metric: took 9.172908ms waiting for pod "kube-apiserver-addons-824928" in "kube-system" namespace to be "Ready" ...
	I1127 11:05:50.313587  341436 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-824928" in "kube-system" namespace to be "Ready" ...
	I1127 11:05:50.320357  341436 pod_ready.go:92] pod "kube-controller-manager-addons-824928" in "kube-system" namespace has status "Ready":"True"
	I1127 11:05:50.320387  341436 pod_ready.go:81] duration metric: took 6.791552ms waiting for pod "kube-controller-manager-addons-824928" in "kube-system" namespace to be "Ready" ...
	I1127 11:05:50.320395  341436 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-824928" in "kube-system" namespace to be "Ready" ...
	I1127 11:05:50.328571  341436 pod_ready.go:92] pod "kube-scheduler-addons-824928" in "kube-system" namespace has status "Ready":"True"
	I1127 11:05:50.328606  341436 pod_ready.go:81] duration metric: took 8.203114ms waiting for pod "kube-scheduler-addons-824928" in "kube-system" namespace to be "Ready" ...
	I1127 11:05:50.328617  341436 pod_ready.go:38] duration metric: took 39.118386ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1127 11:05:50.328640  341436 api_server.go:52] waiting for apiserver process to appear ...
	I1127 11:05:50.328718  341436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1127 11:05:50.430414  341436 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1127 11:05:50.430453  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1127 11:05:50.446851  341436 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1127 11:05:50.446881  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1127 11:05:50.482864  341436 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1127 11:05:50.482892  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1127 11:05:50.547146  341436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1127 11:05:50.560327  341436 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1127 11:05:50.560355  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1127 11:05:50.603783  341436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1127 11:05:50.615830  341436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 11:05:50.669528  341436 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1127 11:05:50.669559  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1127 11:05:50.671321  341436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1127 11:05:50.697917  341436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1127 11:05:50.701898  341436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1127 11:05:50.714805  341436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1127 11:05:50.732935  341436 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1127 11:05:50.732963  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1127 11:05:50.734896  341436 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1127 11:05:50.734916  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1127 11:05:50.911879  341436 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1127 11:05:50.911904  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1127 11:05:50.993122  341436 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1127 11:05:50.993151  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1127 11:05:51.095426  341436 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1127 11:05:51.095455  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1127 11:05:51.158935  341436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1127 11:05:51.214898  341436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1127 11:05:51.282083  341436 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1127 11:05:51.282110  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1127 11:05:51.303880  341436 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1127 11:05:51.303910  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1127 11:05:51.351919  341436 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1127 11:05:51.351948  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1127 11:05:51.591057  341436 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1127 11:05:51.591091  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1127 11:05:51.620478  341436 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1127 11:05:51.620502  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1127 11:05:51.750047  341436 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1127 11:05:51.750071  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1127 11:05:51.777048  341436 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1127 11:05:51.777080  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1127 11:05:51.964203  341436 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1127 11:05:51.964233  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1127 11:05:51.992710  341436 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1127 11:05:51.992738  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1127 11:05:52.030381  341436 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1127 11:05:52.030409  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1127 11:05:52.109562  341436 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1127 11:05:52.109596  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1127 11:05:52.228746  341436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1127 11:05:52.460102  341436 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1127 11:05:52.460129  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1127 11:05:52.488118  341436 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1127 11:05:52.488156  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1127 11:05:52.578691  341436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1127 11:05:52.849376  341436 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1127 11:05:52.849413  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1127 11:05:52.924357  341436 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1127 11:05:52.924388  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1127 11:05:53.341027  341436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1127 11:05:53.435582  341436 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1127 11:05:53.435609  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1127 11:05:53.670819  341436 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1127 11:05:53.670855  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1127 11:05:53.808744  341436 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1127 11:05:53.808778  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1127 11:05:54.028329  341436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1127 11:05:54.930272  341436 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.644980077s)
	I1127 11:05:54.930310  341436 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1127 11:05:54.930332  341436 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.601589761s)
	I1127 11:05:54.930369  341436 api_server.go:72] duration metric: took 4.935596579s to wait for apiserver process to appear ...
	I1127 11:05:54.930384  341436 api_server.go:88] waiting for apiserver healthz status ...
	I1127 11:05:54.930404  341436 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8443/healthz ...
	I1127 11:05:54.935795  341436 api_server.go:279] https://192.168.39.110:8443/healthz returned 200:
	ok
	I1127 11:05:54.937003  341436 api_server.go:141] control plane version: v1.28.4
	I1127 11:05:54.937027  341436 api_server.go:131] duration metric: took 6.6347ms to wait for apiserver health ...
	I1127 11:05:54.937037  341436 system_pods.go:43] waiting for kube-system pods to appear ...
	I1127 11:05:54.943275  341436 system_pods.go:59] 7 kube-system pods found
	I1127 11:05:54.943311  341436 system_pods.go:61] "coredns-5dd5756b68-dnbtk" [81bdabe4-dc5c-47d5-9d04-fb52c1f4dac6] Running
	I1127 11:05:54.943318  341436 system_pods.go:61] "etcd-addons-824928" [67a0ca1b-c54f-4715-ad11-dbddd3360efd] Running
	I1127 11:05:54.943324  341436 system_pods.go:61] "kube-apiserver-addons-824928" [92393bd3-5295-49ee-8323-bb23a7bab86c] Running
	I1127 11:05:54.943330  341436 system_pods.go:61] "kube-controller-manager-addons-824928" [75f67233-f8d4-46ca-bec1-37b0edfa1864] Running
	I1127 11:05:54.943337  341436 system_pods.go:61] "kube-proxy-8tjwr" [1903ce28-5c46-4219-a105-26444313bab2] Running
	I1127 11:05:54.943343  341436 system_pods.go:61] "kube-scheduler-addons-824928" [c19b4a75-4ccb-41d8-a50d-f8abce92a629] Running
	I1127 11:05:54.943356  341436 system_pods.go:61] "nvidia-device-plugin-daemonset-r5lrw" [de9a71c2-da70-468d-b3ff-5f1197f11582] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1127 11:05:54.943371  341436 system_pods.go:74] duration metric: took 6.326084ms to wait for pod list to return data ...
	I1127 11:05:54.943389  341436 default_sa.go:34] waiting for default service account to be created ...
	I1127 11:05:54.945711  341436 default_sa.go:45] found service account: "default"
	I1127 11:05:54.945735  341436 default_sa.go:55] duration metric: took 2.337643ms for default service account to be created ...
	I1127 11:05:54.945746  341436 system_pods.go:116] waiting for k8s-apps to be running ...
	I1127 11:05:54.951934  341436 system_pods.go:86] 7 kube-system pods found
	I1127 11:05:54.951976  341436 system_pods.go:89] "coredns-5dd5756b68-dnbtk" [81bdabe4-dc5c-47d5-9d04-fb52c1f4dac6] Running
	I1127 11:05:54.951985  341436 system_pods.go:89] "etcd-addons-824928" [67a0ca1b-c54f-4715-ad11-dbddd3360efd] Running
	I1127 11:05:54.951994  341436 system_pods.go:89] "kube-apiserver-addons-824928" [92393bd3-5295-49ee-8323-bb23a7bab86c] Running
	I1127 11:05:54.952001  341436 system_pods.go:89] "kube-controller-manager-addons-824928" [75f67233-f8d4-46ca-bec1-37b0edfa1864] Running
	I1127 11:05:54.952007  341436 system_pods.go:89] "kube-proxy-8tjwr" [1903ce28-5c46-4219-a105-26444313bab2] Running
	I1127 11:05:54.952018  341436 system_pods.go:89] "kube-scheduler-addons-824928" [c19b4a75-4ccb-41d8-a50d-f8abce92a629] Running
	I1127 11:05:54.952032  341436 system_pods.go:89] "nvidia-device-plugin-daemonset-r5lrw" [de9a71c2-da70-468d-b3ff-5f1197f11582] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1127 11:05:54.952048  341436 system_pods.go:126] duration metric: took 6.294682ms to wait for k8s-apps to be running ...
	I1127 11:05:54.952065  341436 system_svc.go:44] waiting for kubelet service to be running ....
	I1127 11:05:54.952133  341436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 11:05:56.684979  341436 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1127 11:05:56.685035  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHHostname
	I1127 11:05:56.689000  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:56.689581  341436 main.go:141] libmachine: (addons-824928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:9d:93", ip: ""} in network mk-addons-824928: {Iface:virbr1 ExpiryTime:2023-11-27 12:04:52 +0000 UTC Type:0 Mac:52:54:00:e0:9d:93 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-824928 Clientid:01:52:54:00:e0:9d:93}
	I1127 11:05:56.689623  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined IP address 192.168.39.110 and MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:56.689849  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHPort
	I1127 11:05:56.690097  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHKeyPath
	I1127 11:05:56.690335  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHUsername
	I1127 11:05:56.690516  341436 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-333834/.minikube/machines/addons-824928/id_rsa Username:docker}
	I1127 11:05:57.527378  341436 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1127 11:05:57.857158  341436 addons.go:231] Setting addon gcp-auth=true in "addons-824928"
	I1127 11:05:57.857234  341436 host.go:66] Checking if "addons-824928" exists ...
	I1127 11:05:57.857692  341436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:05:57.857734  341436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:05:57.874463  341436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46861
	I1127 11:05:57.874953  341436 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:05:57.875450  341436 main.go:141] libmachine: Using API Version  1
	I1127 11:05:57.875487  341436 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:05:57.875957  341436 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:05:57.876503  341436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:05:57.876537  341436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:05:57.892415  341436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38939
	I1127 11:05:57.892862  341436 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:05:57.893351  341436 main.go:141] libmachine: Using API Version  1
	I1127 11:05:57.893376  341436 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:05:57.893788  341436 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:05:57.894002  341436 main.go:141] libmachine: (addons-824928) Calling .GetState
	I1127 11:05:57.895793  341436 main.go:141] libmachine: (addons-824928) Calling .DriverName
	I1127 11:05:57.896108  341436 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1127 11:05:57.896131  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHHostname
	I1127 11:05:57.899298  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:57.899692  341436 main.go:141] libmachine: (addons-824928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:9d:93", ip: ""} in network mk-addons-824928: {Iface:virbr1 ExpiryTime:2023-11-27 12:04:52 +0000 UTC Type:0 Mac:52:54:00:e0:9d:93 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-824928 Clientid:01:52:54:00:e0:9d:93}
	I1127 11:05:57.899714  341436 main.go:141] libmachine: (addons-824928) DBG | domain addons-824928 has defined IP address 192.168.39.110 and MAC address 52:54:00:e0:9d:93 in network mk-addons-824928
	I1127 11:05:57.899904  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHPort
	I1127 11:05:57.900085  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHKeyPath
	I1127 11:05:57.900266  341436 main.go:141] libmachine: (addons-824928) Calling .GetSSHUsername
	I1127 11:05:57.900427  341436 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-333834/.minikube/machines/addons-824928/id_rsa Username:docker}
	I1127 11:06:00.986970  341436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.383135103s)
	I1127 11:06:00.987044  341436 main.go:141] libmachine: Making call to close driver server
	I1127 11:06:00.987063  341436 main.go:141] libmachine: (addons-824928) Calling .Close
	I1127 11:06:00.987062  341436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.371188705s)
	I1127 11:06:00.987098  341436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.439916264s)
	I1127 11:06:00.987135  341436 main.go:141] libmachine: Making call to close driver server
	I1127 11:06:00.987106  341436 main.go:141] libmachine: Making call to close driver server
	I1127 11:06:00.987151  341436 main.go:141] libmachine: (addons-824928) Calling .Close
	I1127 11:06:00.987166  341436 main.go:141] libmachine: (addons-824928) Calling .Close
	I1127 11:06:00.987196  341436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (10.315846219s)
	I1127 11:06:00.987227  341436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (10.289283087s)
	I1127 11:06:00.987261  341436 main.go:141] libmachine: Making call to close driver server
	I1127 11:06:00.987273  341436 main.go:141] libmachine: (addons-824928) Calling .Close
	I1127 11:06:00.987274  341436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.285351302s)
	I1127 11:06:00.987294  341436 main.go:141] libmachine: Making call to close driver server
	I1127 11:06:00.987305  341436 main.go:141] libmachine: (addons-824928) Calling .Close
	I1127 11:06:00.987233  341436 main.go:141] libmachine: Making call to close driver server
	I1127 11:06:00.987326  341436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (10.272480507s)
	I1127 11:06:00.987332  341436 main.go:141] libmachine: (addons-824928) Calling .Close
	I1127 11:06:00.987356  341436 main.go:141] libmachine: Making call to close driver server
	I1127 11:06:00.987371  341436 main.go:141] libmachine: (addons-824928) Calling .Close
	I1127 11:06:00.987390  341436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (9.828424401s)
	I1127 11:06:00.987413  341436 main.go:141] libmachine: Making call to close driver server
	I1127 11:06:00.987425  341436 main.go:141] libmachine: (addons-824928) Calling .Close
	I1127 11:06:00.987507  341436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.772576281s)
	I1127 11:06:00.987519  341436 main.go:141] libmachine: Making call to close driver server
	I1127 11:06:00.987529  341436 main.go:141] libmachine: (addons-824928) Calling .Close
	I1127 11:06:00.987620  341436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.758841667s)
	I1127 11:06:00.987637  341436 main.go:141] libmachine: Making call to close driver server
	I1127 11:06:00.987647  341436 main.go:141] libmachine: (addons-824928) Calling .Close
	I1127 11:06:00.987811  341436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.409081755s)
	W1127 11:06:00.987838  341436 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1127 11:06:00.987870  341436 retry.go:31] will retry after 211.425858ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1127 11:06:00.987940  341436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.646877781s)
	I1127 11:06:00.987958  341436 main.go:141] libmachine: Making call to close driver server
	I1127 11:06:00.987969  341436 main.go:141] libmachine: (addons-824928) Calling .Close
	I1127 11:06:00.988173  341436 main.go:141] libmachine: (addons-824928) DBG | Closing plugin on server side
	I1127 11:06:00.988210  341436 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:06:00.988219  341436 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:06:00.988231  341436 main.go:141] libmachine: Making call to close driver server
	I1127 11:06:00.988239  341436 main.go:141] libmachine: (addons-824928) Calling .Close
	I1127 11:06:00.988288  341436 main.go:141] libmachine: (addons-824928) DBG | Closing plugin on server side
	I1127 11:06:00.988306  341436 main.go:141] libmachine: (addons-824928) DBG | Closing plugin on server side
	I1127 11:06:00.988325  341436 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:06:00.988333  341436 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:06:00.988341  341436 main.go:141] libmachine: Making call to close driver server
	I1127 11:06:00.988349  341436 main.go:141] libmachine: (addons-824928) Calling .Close
	I1127 11:06:00.988393  341436 main.go:141] libmachine: (addons-824928) DBG | Closing plugin on server side
	I1127 11:06:00.988412  341436 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:06:00.988420  341436 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:06:00.988428  341436 main.go:141] libmachine: Making call to close driver server
	I1127 11:06:00.988435  341436 main.go:141] libmachine: (addons-824928) Calling .Close
	I1127 11:06:00.988473  341436 main.go:141] libmachine: (addons-824928) DBG | Closing plugin on server side
	I1127 11:06:00.988493  341436 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:06:00.988501  341436 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:06:00.988510  341436 main.go:141] libmachine: Making call to close driver server
	I1127 11:06:00.988518  341436 main.go:141] libmachine: (addons-824928) Calling .Close
	I1127 11:06:00.988638  341436 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:06:00.988648  341436 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:06:00.988658  341436 main.go:141] libmachine: Making call to close driver server
	I1127 11:06:00.988665  341436 main.go:141] libmachine: (addons-824928) Calling .Close
	I1127 11:06:00.989453  341436 main.go:141] libmachine: (addons-824928) DBG | Closing plugin on server side
	I1127 11:06:00.989486  341436 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:06:00.989494  341436 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:06:00.989503  341436 main.go:141] libmachine: Making call to close driver server
	I1127 11:06:00.989511  341436 main.go:141] libmachine: (addons-824928) Calling .Close
	I1127 11:06:00.989579  341436 main.go:141] libmachine: (addons-824928) DBG | Closing plugin on server side
	I1127 11:06:00.989602  341436 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:06:00.989610  341436 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:06:00.989618  341436 main.go:141] libmachine: Making call to close driver server
	I1127 11:06:00.989627  341436 main.go:141] libmachine: (addons-824928) Calling .Close
	I1127 11:06:00.990386  341436 main.go:141] libmachine: (addons-824928) DBG | Closing plugin on server side
	I1127 11:06:00.990435  341436 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:06:00.990445  341436 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:06:00.991924  341436 main.go:141] libmachine: (addons-824928) DBG | Closing plugin on server side
	I1127 11:06:00.991953  341436 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:06:00.991965  341436 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:06:00.992136  341436 main.go:141] libmachine: (addons-824928) DBG | Closing plugin on server side
	I1127 11:06:00.992160  341436 main.go:141] libmachine: (addons-824928) DBG | Closing plugin on server side
	I1127 11:06:00.992188  341436 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:06:00.992197  341436 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:06:00.992207  341436 addons.go:467] Verifying addon ingress=true in "addons-824928"
	I1127 11:06:00.992253  341436 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:06:00.992269  341436 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:06:00.993975  341436 out.go:177] * Verifying ingress addon...
	I1127 11:06:00.992321  341436 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:06:00.992341  341436 main.go:141] libmachine: (addons-824928) DBG | Closing plugin on server side
	I1127 11:06:00.992363  341436 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:06:00.992415  341436 main.go:141] libmachine: (addons-824928) DBG | Closing plugin on server side
	I1127 11:06:00.992514  341436 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:06:00.992574  341436 main.go:141] libmachine: (addons-824928) DBG | Closing plugin on server side
	I1127 11:06:00.992593  341436 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:06:00.992609  341436 main.go:141] libmachine: (addons-824928) DBG | Closing plugin on server side
	I1127 11:06:00.992622  341436 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:06:00.992635  341436 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:06:00.992657  341436 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:06:00.992659  341436 main.go:141] libmachine: (addons-824928) DBG | Closing plugin on server side
	I1127 11:06:00.995416  341436 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:06:00.995432  341436 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:06:00.995442  341436 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:06:00.995432  341436 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:06:00.995456  341436 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:06:00.995457  341436 main.go:141] libmachine: Making call to close driver server
	I1127 11:06:00.995472  341436 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:06:00.995480  341436 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:06:00.995483  341436 main.go:141] libmachine: Making call to close driver server
	I1127 11:06:00.995489  341436 main.go:141] libmachine: Making call to close driver server
	I1127 11:06:00.995493  341436 main.go:141] libmachine: (addons-824928) Calling .Close
	I1127 11:06:00.995498  341436 main.go:141] libmachine: (addons-824928) Calling .Close
	I1127 11:06:00.995545  341436 main.go:141] libmachine: (addons-824928) Calling .Close
	I1127 11:06:00.995469  341436 main.go:141] libmachine: Making call to close driver server
	I1127 11:06:00.995592  341436 main.go:141] libmachine: (addons-824928) Calling .Close
	I1127 11:06:00.996435  341436 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1127 11:06:00.997373  341436 main.go:141] libmachine: (addons-824928) DBG | Closing plugin on server side
	I1127 11:06:00.997374  341436 main.go:141] libmachine: (addons-824928) DBG | Closing plugin on server side
	I1127 11:06:00.997404  341436 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:06:00.997415  341436 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:06:00.997418  341436 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:06:00.997424  341436 addons.go:467] Verifying addon metrics-server=true in "addons-824928"
	I1127 11:06:00.997427  341436 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:06:00.997470  341436 main.go:141] libmachine: (addons-824928) DBG | Closing plugin on server side
	I1127 11:06:00.997496  341436 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:06:00.997504  341436 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:06:00.997514  341436 addons.go:467] Verifying addon registry=true in "addons-824928"
	I1127 11:06:00.999132  341436 out.go:177] * Verifying registry addon...
	I1127 11:06:00.997597  341436 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:06:01.000679  341436 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:06:01.001323  341436 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1127 11:06:01.005268  341436 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1127 11:06:01.005293  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:01.026110  341436 main.go:141] libmachine: Making call to close driver server
	I1127 11:06:01.026136  341436 main.go:141] libmachine: (addons-824928) Calling .Close
	I1127 11:06:01.026454  341436 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:06:01.026475  341436 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:06:01.026494  341436 main.go:141] libmachine: (addons-824928) DBG | Closing plugin on server side
	I1127 11:06:01.027144  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:01.027168  341436 main.go:141] libmachine: Making call to close driver server
	I1127 11:06:01.027186  341436 main.go:141] libmachine: (addons-824928) Calling .Close
	I1127 11:06:01.027529  341436 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:06:01.027547  341436 main.go:141] libmachine: Making call to close connection to plugin binary
	W1127 11:06:01.027667  341436 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1127 11:06:01.029331  341436 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1127 11:06:01.029348  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:01.033065  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:01.199812  341436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1127 11:06:01.531730  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:01.538848  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:02.033104  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:02.043804  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:02.531863  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:02.542442  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:03.031404  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:03.038847  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:03.511828  341436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.483432095s)
	I1127 11:06:03.511898  341436 main.go:141] libmachine: Making call to close driver server
	I1127 11:06:03.511846  341436 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (8.559683466s)
	I1127 11:06:03.511912  341436 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.61578427s)
	I1127 11:06:03.511963  341436 system_svc.go:56] duration metric: took 8.559888094s WaitForService to wait for kubelet.
	I1127 11:06:03.511913  341436 main.go:141] libmachine: (addons-824928) Calling .Close
	I1127 11:06:03.511992  341436 kubeadm.go:581] duration metric: took 13.517210242s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1127 11:06:03.512039  341436 node_conditions.go:102] verifying NodePressure condition ...
	I1127 11:06:03.513986  341436 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1127 11:06:03.512348  341436 main.go:141] libmachine: (addons-824928) DBG | Closing plugin on server side
	I1127 11:06:03.512381  341436 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:06:03.515526  341436 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:06:03.517042  341436 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1127 11:06:03.515546  341436 main.go:141] libmachine: Making call to close driver server
	I1127 11:06:03.518467  341436 main.go:141] libmachine: (addons-824928) Calling .Close
	I1127 11:06:03.518523  341436 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1127 11:06:03.518545  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1127 11:06:03.518782  341436 main.go:141] libmachine: (addons-824928) DBG | Closing plugin on server side
	I1127 11:06:03.518824  341436 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:06:03.518834  341436 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:06:03.518846  341436 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-824928"
	I1127 11:06:03.520335  341436 out.go:177] * Verifying csi-hostpath-driver addon...
	I1127 11:06:03.522623  341436 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1127 11:06:03.525662  341436 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1127 11:06:03.525695  341436 node_conditions.go:123] node cpu capacity is 2
	I1127 11:06:03.525709  341436 node_conditions.go:105] duration metric: took 13.664287ms to run NodePressure ...
	I1127 11:06:03.525723  341436 start.go:228] waiting for startup goroutines ...
	I1127 11:06:03.533785  341436 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1127 11:06:03.533804  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:03.539068  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:03.542633  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:03.544869  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:03.577515  341436 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1127 11:06:03.577545  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1127 11:06:03.681579  341436 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1127 11:06:03.681608  341436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1127 11:06:03.796390  341436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1127 11:06:04.034839  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:04.038656  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:04.046947  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:04.110190  341436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.910317139s)
	I1127 11:06:04.110262  341436 main.go:141] libmachine: Making call to close driver server
	I1127 11:06:04.110283  341436 main.go:141] libmachine: (addons-824928) Calling .Close
	I1127 11:06:04.110599  341436 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:06:04.110641  341436 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:06:04.110662  341436 main.go:141] libmachine: Making call to close driver server
	I1127 11:06:04.110679  341436 main.go:141] libmachine: (addons-824928) Calling .Close
	I1127 11:06:04.111039  341436 main.go:141] libmachine: (addons-824928) DBG | Closing plugin on server side
	I1127 11:06:04.111076  341436 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:06:04.111089  341436 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:06:04.533826  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:04.541194  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:04.552181  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:05.032521  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:05.038641  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:05.047951  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:05.536008  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:05.542102  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:05.549395  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:05.863313  341436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.066863331s)
	I1127 11:06:05.863377  341436 main.go:141] libmachine: Making call to close driver server
	I1127 11:06:05.863396  341436 main.go:141] libmachine: (addons-824928) Calling .Close
	I1127 11:06:05.863774  341436 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:06:05.863797  341436 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:06:05.863808  341436 main.go:141] libmachine: Making call to close driver server
	I1127 11:06:05.863817  341436 main.go:141] libmachine: (addons-824928) Calling .Close
	I1127 11:06:05.863858  341436 main.go:141] libmachine: (addons-824928) DBG | Closing plugin on server side
	I1127 11:06:05.864085  341436 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:06:05.864099  341436 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:06:05.865273  341436 addons.go:467] Verifying addon gcp-auth=true in "addons-824928"
	I1127 11:06:05.867165  341436 out.go:177] * Verifying gcp-auth addon...
	I1127 11:06:05.869397  341436 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1127 11:06:05.873152  341436 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1127 11:06:05.873174  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:05.876128  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:06.031734  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:06.042379  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:06.058452  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:06.381087  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:06.533469  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:06.537603  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:06.558752  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:06.880984  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:07.032491  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:07.037731  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:07.048154  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:07.380667  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:07.532325  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:07.537810  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:07.548268  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:07.882969  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:08.034528  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:08.041240  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:08.048728  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:08.380646  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:08.532276  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:08.537529  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:08.547714  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:08.880309  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:09.032482  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:09.040935  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:09.049546  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:09.381019  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:09.534789  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:09.537862  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:09.549472  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:09.946812  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:10.033133  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:10.037220  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:10.047842  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:10.380357  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:10.534610  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:10.542097  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:10.552949  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:10.880835  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:11.032783  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:11.038481  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:11.049159  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:11.380450  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:11.532501  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:11.538326  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:11.552114  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:11.880686  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:12.032443  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:12.038114  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:12.049305  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:12.380577  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:12.533269  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:12.537383  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:12.549946  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:12.880417  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:13.032897  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:13.037894  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:13.049042  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:13.381198  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:13.533668  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:13.539250  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:13.549492  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:13.880633  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:14.031611  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:14.038266  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:14.048893  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:14.379908  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:14.532184  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:14.537607  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:14.548046  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:14.881660  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:15.032760  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:15.037443  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:15.048060  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:15.380780  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:15.782995  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:15.784187  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:15.786871  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:15.881616  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:16.032675  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:16.040365  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:16.048717  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:16.381618  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:16.534929  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:16.540101  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:16.550608  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:16.883344  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:17.032693  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:17.039863  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:17.061844  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:17.380402  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:17.532982  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:17.537160  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:17.547702  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:17.880252  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:18.033076  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:18.037792  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:18.048501  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:18.380474  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:18.531899  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:18.537427  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:18.548398  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:18.880482  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:19.031766  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:19.037686  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:19.048035  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:19.380191  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:19.533159  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:19.537071  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:19.547230  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:19.880191  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:20.032899  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:20.037355  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:20.052706  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:20.380342  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:20.532348  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:20.537408  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:20.548179  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:20.880494  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:21.032546  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:21.038544  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:21.048974  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:21.380747  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:21.532390  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:21.538398  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:21.548219  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:21.881360  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:22.036157  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:22.044779  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:22.049120  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:22.383566  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:22.533583  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:22.538417  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:22.548630  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:22.880850  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:23.032851  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:23.039438  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:23.048124  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:23.379966  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:23.538272  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:23.548732  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:23.557192  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:23.880050  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:24.033885  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:24.038333  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:24.047820  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:24.380870  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:24.538496  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:24.539159  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:24.555290  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:24.882246  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:25.032247  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:25.039094  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:25.048382  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:25.380154  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:25.532797  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:25.538103  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:25.549234  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:25.879950  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:26.033988  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:26.036962  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:26.053264  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:26.380257  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:26.534158  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:26.541829  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:26.548536  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:27.372661  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:27.374686  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:27.380117  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:27.380160  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:27.384410  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:27.531746  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:27.537059  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:27.548753  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:27.880773  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:28.032654  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:28.040213  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:28.051423  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:28.380254  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:28.532744  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:28.542141  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:28.547488  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:28.880826  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:29.033008  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:29.038159  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:29.048089  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:29.381086  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:29.533770  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:29.537943  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:29.548485  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:29.880995  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:30.032842  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:30.039224  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:30.058927  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:30.381084  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:30.532531  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:30.542111  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:06:30.548033  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:30.882964  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:31.034376  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:31.039091  341436 kapi.go:107] duration metric: took 30.037762058s to wait for kubernetes.io/minikube-addons=registry ...
	I1127 11:06:31.049102  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:31.382417  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:31.659214  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:31.662532  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:31.881568  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:32.033165  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:32.048424  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:32.380867  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:32.532320  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:32.549685  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:32.880799  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:33.032302  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:33.048783  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:33.380933  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:33.533214  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:33.550050  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:33.882130  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:34.033221  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:34.050907  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:34.380912  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:34.532593  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:34.555466  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:34.885212  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:35.032164  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:35.049646  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:35.380567  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:35.532392  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:35.547662  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:35.880903  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:36.032821  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:36.048068  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:36.380647  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:36.534482  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:36.550344  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:36.880733  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:37.032130  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:37.049231  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:37.382108  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:37.534015  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:37.564453  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:37.881701  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:38.032738  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:38.048029  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:38.382874  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:38.532530  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:38.548599  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:38.881309  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:39.033956  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:39.050801  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:39.400044  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:39.532696  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:39.547920  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:39.888139  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:40.037626  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:40.050172  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:40.381484  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:40.532145  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:40.549095  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:40.995009  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:41.215617  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:41.217990  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:41.379882  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:41.532752  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:41.548408  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:41.882107  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:42.033160  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:42.048825  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:42.379773  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:42.533042  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:42.549223  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:42.885176  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:43.034038  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:43.049213  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:43.380647  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:43.532429  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:43.550305  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:43.892985  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:44.032697  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:44.048824  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:44.393593  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:44.536146  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:44.551039  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:44.881413  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:45.032397  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:45.143433  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:45.381996  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:45.532797  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:45.549730  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:45.883686  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:46.032746  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:46.051468  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:46.388349  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:46.535077  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:46.549695  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:46.881005  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:47.033057  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:47.049392  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:47.447828  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:47.532341  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:47.548800  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:47.881290  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:48.033827  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:48.048570  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:48.383018  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:48.534754  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:48.548646  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:48.880520  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:49.031948  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:49.049820  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:49.390169  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:49.532586  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:49.548779  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:49.880357  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:50.033782  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:50.049695  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:50.381770  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:50.533710  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:50.549628  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:50.879664  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:51.032195  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:51.050941  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:51.381501  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:51.532343  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:51.548960  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:51.880072  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:52.034605  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:52.054575  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:52.381247  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:52.533452  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:52.549667  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:06:52.883482  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:53.031879  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:53.051821  341436 kapi.go:107] duration metric: took 49.52919569s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1127 11:06:53.380172  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:53.533665  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:53.881288  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:54.033284  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:54.383378  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:54.535322  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:54.881110  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:55.032228  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:55.380163  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:55.533258  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:55.880843  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:56.032310  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:56.380286  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:56.533081  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:56.880989  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:57.033439  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:57.380956  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:57.534130  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:57.880862  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:58.032683  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:58.380526  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:58.531922  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:58.881201  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:59.033085  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:59.380785  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:06:59.533286  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:06:59.880474  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:07:00.035427  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:07:00.381589  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:07:00.532075  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:07:00.881542  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:07:01.032173  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:07:01.380314  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:07:01.534951  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:07:01.880263  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:07:02.033424  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:07:02.380877  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:07:02.532538  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:07:02.881462  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:07:03.032211  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:07:03.380745  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:07:03.532850  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:07:03.880059  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:07:04.032776  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:07:04.380200  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:07:04.533277  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:07:04.881359  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:07:05.032260  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:07:05.380635  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:07:05.532847  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:07:05.880765  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:07:06.033769  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:07:06.381700  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:07:06.532051  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:07:06.882411  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:07:07.032197  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:07:07.380402  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:07:07.531879  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:07:07.880331  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:07:08.031877  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:07:08.383227  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:07:08.534554  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:07:08.882110  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:07:09.034748  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:07:09.381303  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:07:09.533290  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:07:10.002484  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:07:10.039572  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:07:10.380685  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:07:10.532126  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:07:10.880558  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:07:11.032967  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:07:11.379943  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:07:11.539340  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:07:11.881842  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:07:12.035131  341436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:07:12.381063  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:07:12.535179  341436 kapi.go:107] duration metric: took 1m11.538736161s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1127 11:07:12.881002  341436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:07:13.380313  341436 kapi.go:107] duration metric: took 1m7.510913824s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1127 11:07:13.382488  341436 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-824928 cluster.
	I1127 11:07:13.384049  341436 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1127 11:07:13.385719  341436 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1127 11:07:13.387348  341436 out.go:177] * Enabled addons: helm-tiller, cloud-spanner, ingress-dns, nvidia-device-plugin, metrics-server, storage-provisioner, inspektor-gadget, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1127 11:07:13.388830  341436 addons.go:502] enable addons completed in 1m23.493406095s: enabled=[helm-tiller cloud-spanner ingress-dns nvidia-device-plugin metrics-server storage-provisioner inspektor-gadget storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1127 11:07:13.388881  341436 start.go:233] waiting for cluster config update ...
	I1127 11:07:13.388900  341436 start.go:242] writing updated cluster config ...
	I1127 11:07:13.389176  341436 ssh_runner.go:195] Run: rm -f paused
	I1127 11:07:13.443165  341436 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1127 11:07:13.445112  341436 out.go:177] * Done! kubectl is now configured to use "addons-824928" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	e9c8065639c8b       a416a98b71e22       3 seconds ago        Exited              busybox                                  0                   d9ac13a7f865f       test-local-path
	342e7322bfdae       b135667c98980       3 seconds ago        Running             nginx                                    0                   7672bfd0b7f65       nginx
	c57d08fd12f69       beae173ccac6a       8 seconds ago        Exited              registry-test                            0                   510a7cf61e142       registry-test
	b79a892439ce8       a416a98b71e22       9 seconds ago        Exited              helper-pod                               0                   e976a65f29fe8       helper-pod-create-pvc-3ce46789-1491-423b-b053-97f61c7a2812
	35b9253df5c32       98f6c3b32d565       10 seconds ago       Exited              helm-test                                0                   773c768fa3d6c       helm-test
	f385c1a35258e       b2e369e632bea       12 seconds ago       Running             headlamp                                 0                   d404f781111d1       headlamp-777fd4b855-htlfd
	a482f158fb19d       6d2a98b274382       21 seconds ago       Running             gcp-auth                                 0                   1cf26f15f688d       gcp-auth-d4c87556c-kwhbd
	97128d8d1373d       5aa0bf4798fa2       22 seconds ago       Running             controller                               0                   9ce847e1bc998       ingress-nginx-controller-7c6974c4d8-6l2vv
	2714c9394e895       738351fd438f0       40 seconds ago       Running             csi-snapshotter                          0                   243e89a75cd82       csi-hostpathplugin-msc8b
	db143249e8e54       931dbfd16f87c       42 seconds ago       Running             csi-provisioner                          0                   243e89a75cd82       csi-hostpathplugin-msc8b
	5f08fe5f73cfe       e899260153aed       43 seconds ago       Running             liveness-probe                           0                   243e89a75cd82       csi-hostpathplugin-msc8b
	70c981ff5faa6       e255e073c508c       45 seconds ago       Running             hostpath                                 0                   243e89a75cd82       csi-hostpathplugin-msc8b
	87d930f74cfce       88ef14a257f42       46 seconds ago       Running             node-driver-registrar                    0                   243e89a75cd82       csi-hostpathplugin-msc8b
	a30014f73b405       1ebff0f9671bc       47 seconds ago       Exited              patch                                    0                   4a6ba9fcb3648       gcp-auth-certs-patch-pclwc
	495c95a4b172a       1ebff0f9671bc       47 seconds ago       Exited              create                                   0                   55552209f560b       gcp-auth-certs-create-4922j
	bb98e5e93f362       19a639eda60f0       47 seconds ago       Running             csi-resizer                              0                   98aff19811bb2       csi-hostpath-resizer-0
	269e194a702da       a1ed5895ba635       49 seconds ago       Running             csi-external-health-monitor-controller   0                   243e89a75cd82       csi-hostpathplugin-msc8b
	20b478aecd380       1ebff0f9671bc       51 seconds ago       Exited              patch                                    0                   a0dca6128bdd8       ingress-nginx-admission-patch-xrsln
	14c19a3162fbf       59cbb42146a37       51 seconds ago       Running             csi-attacher                             0                   b413596d1db0c       csi-hostpath-attacher-0
	d9fc606b6a8b0       1ebff0f9671bc       52 seconds ago       Exited              create                                   0                   ff30f25f86840       ingress-nginx-admission-create-jx2xm
	a844b810da0a1       aa61ee9c70bc4       53 seconds ago       Running             volume-snapshot-controller               0                   4d113c1809b3a       snapshot-controller-58dbcc7b99-l7srk
	14551e6fb54f8       aa61ee9c70bc4       54 seconds ago       Running             volume-snapshot-controller               0                   948d897df3110       snapshot-controller-58dbcc7b99-7vwrh
	5ac90ebae2bc6       e16d1e3a10667       55 seconds ago       Running             local-path-provisioner                   0                   a548725ba279a       local-path-provisioner-78b46b4d5c-dl2rp
	70f010bc9f518       e872064a4c61f       57 seconds ago       Running             gadget                                   0                   21eb65cb46862       gadget-wv867
	4977f1df62b0a       a608c686bac93       About a minute ago   Running             metrics-server                           0                   13b192d62235e       metrics-server-7c66d45ddc-jlsnc
	6e39d87619e30       e41cf323c46dd       About a minute ago   Running             cloud-spanner-emulator                   0                   329bc562ea99f       cloud-spanner-emulator-5649c69bf6-d8fxb
	db2e6b43a3ecd       6e38f40d628db       About a minute ago   Running             storage-provisioner                      0                   e3f42db84e9c3       storage-provisioner
	dab7e41866c47       1499ed4fbd0aa       About a minute ago   Running             minikube-ingress-dns                     0                   285d6cf7e3e9e       kube-ingress-dns-minikube
	6cd1fe5e1a394       ead0a4a53df89       About a minute ago   Running             coredns                                  0                   a07cb7fea9af0       coredns-5dd5756b68-dnbtk
	b857e9fe5d74a       83f6cc407eed8       About a minute ago   Running             kube-proxy                               0                   1adc753bcf03f       kube-proxy-8tjwr
	db0fc1fb6d6ec       e3db313c6dbc0       2 minutes ago        Running             kube-scheduler                           0                   537436b3a15e4       kube-scheduler-addons-824928
	66c492cd46877       73deb9a3f7025       2 minutes ago        Running             etcd                                     0                   931c803cef8e2       etcd-addons-824928
	2f552f2f47593       d058aa5ab969c       2 minutes ago        Running             kube-controller-manager                  0                   7a45faf709b6f       kube-controller-manager-addons-824928
	379f4d6edc49a       7fe0e6f37db33       2 minutes ago        Running             kube-apiserver                           0                   9a10dcf1268d2       kube-apiserver-addons-824928
	
	* 
	* ==> containerd <==
	* -- Journal begins at Mon 2023-11-27 11:04:49 UTC, ends at Mon 2023-11-27 11:07:33 UTC. --
	Nov 27 11:07:30 addons-824928 containerd[686]: time="2023-11-27T11:07:30.370222125Z" level=info msg="CreateContainer within sandbox \"7672bfd0b7f65c8077e035e392eef517c4cda22586b0e494e463350dd044e117\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"342e7322bfdae559382bed4a54e207854d77b984040815c3020c41cebf33e549\""
	Nov 27 11:07:30 addons-824928 containerd[686]: time="2023-11-27T11:07:30.371403743Z" level=info msg="StartContainer for \"342e7322bfdae559382bed4a54e207854d77b984040815c3020c41cebf33e549\""
	Nov 27 11:07:30 addons-824928 containerd[686]: time="2023-11-27T11:07:30.508545068Z" level=info msg="StartContainer for \"342e7322bfdae559382bed4a54e207854d77b984040815c3020c41cebf33e549\" returns successfully"
	Nov 27 11:07:30 addons-824928 containerd[686]: time="2023-11-27T11:07:30.616078401Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Nov 27 11:07:30 addons-824928 containerd[686]: time="2023-11-27T11:07:30.636896394Z" level=info msg="ImageCreate event name:\"docker.io/library/busybox:stable\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Nov 27 11:07:30 addons-824928 containerd[686]: time="2023-11-27T11:07:30.638975295Z" level=info msg="stop pulling image docker.io/library/busybox:stable: active requests=0, bytes read=4399"
	Nov 27 11:07:30 addons-824928 containerd[686]: time="2023-11-27T11:07:30.641033248Z" level=info msg="ImageUpdate event name:\"sha256:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Nov 27 11:07:30 addons-824928 containerd[686]: time="2023-11-27T11:07:30.643870591Z" level=info msg="ImageUpdate event name:\"docker.io/library/busybox:stable\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Nov 27 11:07:30 addons-824928 containerd[686]: time="2023-11-27T11:07:30.647000280Z" level=info msg="ImageUpdate event name:\"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Nov 27 11:07:30 addons-824928 containerd[686]: time="2023-11-27T11:07:30.647953129Z" level=info msg="Pulled image \"busybox:stable\" with image id \"sha256:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824\", repo tag \"docker.io/library/busybox:stable\", repo digest \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\", size \"2224229\" in 356.95601ms"
	Nov 27 11:07:30 addons-824928 containerd[686]: time="2023-11-27T11:07:30.648038879Z" level=info msg="PullImage \"busybox:stable\" returns image reference \"sha256:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824\""
	Nov 27 11:07:30 addons-824928 containerd[686]: time="2023-11-27T11:07:30.654697543Z" level=info msg="CreateContainer within sandbox \"d9ac13a7f865f50e2ac277a9f17fe44df394250b7dbd3f949d3b273c90ed6d67\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 27 11:07:30 addons-824928 containerd[686]: time="2023-11-27T11:07:30.685519950Z" level=info msg="CreateContainer within sandbox \"d9ac13a7f865f50e2ac277a9f17fe44df394250b7dbd3f949d3b273c90ed6d67\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"e9c8065639c8bfb47a3931163147bc7e23f532f4004aba19720f89a7635f6a09\""
	Nov 27 11:07:30 addons-824928 containerd[686]: time="2023-11-27T11:07:30.688793762Z" level=info msg="StartContainer for \"e9c8065639c8bfb47a3931163147bc7e23f532f4004aba19720f89a7635f6a09\""
	Nov 27 11:07:30 addons-824928 containerd[686]: time="2023-11-27T11:07:30.775091935Z" level=info msg="StartContainer for \"e9c8065639c8bfb47a3931163147bc7e23f532f4004aba19720f89a7635f6a09\" returns successfully"
	Nov 27 11:07:30 addons-824928 containerd[686]: time="2023-11-27T11:07:30.814033614Z" level=info msg="shim disconnected" id=e9c8065639c8bfb47a3931163147bc7e23f532f4004aba19720f89a7635f6a09 namespace=k8s.io
	Nov 27 11:07:30 addons-824928 containerd[686]: time="2023-11-27T11:07:30.814107751Z" level=warning msg="cleaning up after shim disconnected" id=e9c8065639c8bfb47a3931163147bc7e23f532f4004aba19720f89a7635f6a09 namespace=k8s.io
	Nov 27 11:07:30 addons-824928 containerd[686]: time="2023-11-27T11:07:30.814119603Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Nov 27 11:07:32 addons-824928 containerd[686]: time="2023-11-27T11:07:32.483619315Z" level=info msg="StopPodSandbox for \"d9ac13a7f865f50e2ac277a9f17fe44df394250b7dbd3f949d3b273c90ed6d67\""
	Nov 27 11:07:32 addons-824928 containerd[686]: time="2023-11-27T11:07:32.483704127Z" level=info msg="Container to stop \"e9c8065639c8bfb47a3931163147bc7e23f532f4004aba19720f89a7635f6a09\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Nov 27 11:07:32 addons-824928 containerd[686]: time="2023-11-27T11:07:32.548496822Z" level=info msg="shim disconnected" id=d9ac13a7f865f50e2ac277a9f17fe44df394250b7dbd3f949d3b273c90ed6d67 namespace=k8s.io
	Nov 27 11:07:32 addons-824928 containerd[686]: time="2023-11-27T11:07:32.548546334Z" level=warning msg="cleaning up after shim disconnected" id=d9ac13a7f865f50e2ac277a9f17fe44df394250b7dbd3f949d3b273c90ed6d67 namespace=k8s.io
	Nov 27 11:07:32 addons-824928 containerd[686]: time="2023-11-27T11:07:32.548556711Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Nov 27 11:07:32 addons-824928 containerd[686]: time="2023-11-27T11:07:32.621695882Z" level=info msg="TearDown network for sandbox \"d9ac13a7f865f50e2ac277a9f17fe44df394250b7dbd3f949d3b273c90ed6d67\" successfully"
	Nov 27 11:07:32 addons-824928 containerd[686]: time="2023-11-27T11:07:32.621993327Z" level=info msg="StopPodSandbox for \"d9ac13a7f865f50e2ac277a9f17fe44df394250b7dbd3f949d3b273c90ed6d67\" returns successfully"
	
	* 
	* ==> coredns [6cd1fe5e1a3947a01e59e80c75bd23f6e8aceac22fae18cf8a637a7d1f1acc7a] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	* 
	* ==> describe nodes <==
	* Name:               addons-824928
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-824928
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=81390b5609e7feb2151fde4633273d04eb05a21f
	                    minikube.k8s.io/name=addons-824928
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_27T11_05_38_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-824928
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-824928"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Nov 2023 11:05:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-824928
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Nov 2023 11:07:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Nov 2023 11:07:10 +0000   Mon, 27 Nov 2023 11:05:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Nov 2023 11:07:10 +0000   Mon, 27 Nov 2023 11:05:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Nov 2023 11:07:10 +0000   Mon, 27 Nov 2023 11:05:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Nov 2023 11:07:10 +0000   Mon, 27 Nov 2023 11:05:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    addons-824928
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 36b437273c054dcfa23818c2b79152d9
	  System UUID:                36b43727-3c05-4dcf-a238-18c2b79152d9
	  Boot ID:                    17df0747-9d50-4aed-b206-e0ed8a5265de
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.9
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (22 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5649c69bf6-d8fxb                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  gadget                      gadget-wv867                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  gcp-auth                    gcp-auth-d4c87556c-kwhbd                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  headlamp                    headlamp-777fd4b855-htlfd                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	  ingress-nginx               ingress-nginx-controller-7c6974c4d8-6l2vv                     100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         94s
	  kube-system                 coredns-5dd5756b68-dnbtk                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     104s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 csi-hostpathplugin-msc8b                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 etcd-addons-824928                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         116s
	  kube-system                 kube-apiserver-addons-824928                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-addons-824928                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-proxy-8tjwr                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-addons-824928                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 metrics-server-7c66d45ddc-jlsnc                               100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         97s
	  kube-system                 snapshot-controller-58dbcc7b99-7vwrh                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 snapshot-controller-58dbcc7b99-l7srk                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  local-path-storage          helper-pod-delete-pvc-3ce46789-1491-423b-b053-97f61c7a2812    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  local-path-storage          local-path-provisioner-78b46b4d5c-dl2rp                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 102s                 kube-proxy       
	  Normal  Starting                 2m5s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m4s (x8 over 2m4s)  kubelet          Node addons-824928 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s (x8 over 2m4s)  kubelet          Node addons-824928 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s (x7 over 2m4s)  kubelet          Node addons-824928 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 116s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  116s                 kubelet          Node addons-824928 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s                 kubelet          Node addons-824928 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s                 kubelet          Node addons-824928 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  116s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                116s                 kubelet          Node addons-824928 status is now: NodeReady
	  Normal  RegisteredNode           105s                 node-controller  Node addons-824928 event: Registered Node addons-824928 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.093819] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.435789] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.399987] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.140069] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.057443] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Nov27 11:05] systemd-fstab-generator[556]: Ignoring "noauto" for root device
	[  +0.108349] systemd-fstab-generator[567]: Ignoring "noauto" for root device
	[  +0.144704] systemd-fstab-generator[580]: Ignoring "noauto" for root device
	[  +0.103940] systemd-fstab-generator[591]: Ignoring "noauto" for root device
	[  +0.245419] systemd-fstab-generator[618]: Ignoring "noauto" for root device
	[  +6.173396] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[ +21.085956] systemd-fstab-generator[980]: Ignoring "noauto" for root device
	[  +8.279005] systemd-fstab-generator[1342]: Ignoring "noauto" for root device
	[ +18.814812] kauditd_printk_skb: 30 callbacks suppressed
	[Nov27 11:06] kauditd_printk_skb: 50 callbacks suppressed
	[ +17.753671] kauditd_printk_skb: 38 callbacks suppressed
	[ +23.657963] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.481529] kauditd_printk_skb: 6 callbacks suppressed
	[Nov27 11:07] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.932747] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.027397] kauditd_printk_skb: 13 callbacks suppressed
	[  +6.387802] kauditd_printk_skb: 41 callbacks suppressed
	
	* 
	* ==> etcd [66c492cd46877018dcf78e40a43534287c9b2dea0349391e335f5e2960f01b04] <==
	* {"level":"warn","ts":"2023-11-27T11:06:41.205622Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.955356ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81826"}
	{"level":"info","ts":"2023-11-27T11:06:41.205677Z","caller":"traceutil/trace.go:171","msg":"trace[1382072465] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:967; }","duration":"164.014951ms","start":"2023-11-27T11:06:41.041656Z","end":"2023-11-27T11:06:41.20567Z","steps":["trace[1382072465] 'agreement among raft nodes before linearized reading'  (duration: 163.223769ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-27T11:06:47.442054Z","caller":"traceutil/trace.go:171","msg":"trace[82044356] transaction","detail":"{read_only:false; response_revision:1027; number_of_response:1; }","duration":"155.101082ms","start":"2023-11-27T11:06:47.286923Z","end":"2023-11-27T11:06:47.442024Z","steps":["trace[82044356] 'process raft request'  (duration: 154.475932ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-27T11:07:09.992997Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.03674ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10869"}
	{"level":"info","ts":"2023-11-27T11:07:09.993047Z","caller":"traceutil/trace.go:171","msg":"trace[1014567922] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1090; }","duration":"117.105911ms","start":"2023-11-27T11:07:09.87593Z","end":"2023-11-27T11:07:09.993036Z","steps":["trace[1014567922] 'range keys from in-memory index tree'  (duration: 116.938191ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-27T11:07:20.252553Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.686508ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11932441215023404447 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/test-pvc.179b7631a19a26f5\" mod_revision:1181 > success:<request_put:<key:\"/registry/events/default/test-pvc.179b7631a19a26f5\" value_size:818 lease:2709069178168627919 >> failure:<request_range:<key:\"/registry/events/default/test-pvc.179b7631a19a26f5\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-11-27T11:07:20.252649Z","caller":"traceutil/trace.go:171","msg":"trace[643766239] linearizableReadLoop","detail":"{readStateIndex:1220; appliedIndex:1219; }","duration":"398.561085ms","start":"2023-11-27T11:07:19.854078Z","end":"2023-11-27T11:07:20.252639Z","steps":["trace[643766239] 'read index received'  (duration: 288.751946ms)","trace[643766239] 'applied index is now lower than readState.Index'  (duration: 109.808177ms)"],"step_count":2}
	{"level":"info","ts":"2023-11-27T11:07:20.252709Z","caller":"traceutil/trace.go:171","msg":"trace[1049422620] transaction","detail":"{read_only:false; response_revision:1182; number_of_response:1; }","duration":"400.698226ms","start":"2023-11-27T11:07:19.851994Z","end":"2023-11-27T11:07:20.252692Z","steps":["trace[1049422620] 'process raft request'  (duration: 290.881289ms)","trace[1049422620] 'compare'  (duration: 104.524104ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-27T11:07:20.252795Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"398.677094ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-824928\" ","response":"range_response_count:1 size:7095"}
	{"level":"info","ts":"2023-11-27T11:07:20.252819Z","caller":"traceutil/trace.go:171","msg":"trace[1453020588] range","detail":"{range_begin:/registry/minions/addons-824928; range_end:; response_count:1; response_revision:1182; }","duration":"398.75641ms","start":"2023-11-27T11:07:19.854054Z","end":"2023-11-27T11:07:20.25281Z","steps":["trace[1453020588] 'agreement among raft nodes before linearized reading'  (duration: 398.645477ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-27T11:07:20.252849Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-27T11:07:19.851976Z","time spent":"400.836341ms","remote":"127.0.0.1:44482","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":886,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/default/test-pvc.179b7631a19a26f5\" mod_revision:1181 > success:<request_put:<key:\"/registry/events/default/test-pvc.179b7631a19a26f5\" value_size:818 lease:2709069178168627919 >> failure:<request_range:<key:\"/registry/events/default/test-pvc.179b7631a19a26f5\" > >"}
	{"level":"warn","ts":"2023-11-27T11:07:20.252838Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-27T11:07:19.854042Z","time spent":"398.791163ms","remote":"127.0.0.1:44504","response type":"/etcdserverpb.KV/Range","request count":0,"request size":33,"response count":1,"response size":7119,"request content":"key:\"/registry/minions/addons-824928\" "}
	{"level":"warn","ts":"2023-11-27T11:07:20.254154Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"309.185429ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/test-pvc\" ","response":"range_response_count:1 size:1412"}
	{"level":"info","ts":"2023-11-27T11:07:20.254174Z","caller":"traceutil/trace.go:171","msg":"trace[1607140580] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/test-pvc; range_end:; response_count:1; response_revision:1182; }","duration":"309.214215ms","start":"2023-11-27T11:07:19.944954Z","end":"2023-11-27T11:07:20.254168Z","steps":["trace[1607140580] 'agreement among raft nodes before linearized reading'  (duration: 309.154156ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-27T11:07:20.254188Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.973259ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3753"}
	{"level":"warn","ts":"2023-11-27T11:07:20.254198Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-27T11:07:19.944937Z","time spent":"309.257368ms","remote":"127.0.0.1:44500","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":1,"response size":1436,"request content":"key:\"/registry/persistentvolumeclaims/default/test-pvc\" "}
	{"level":"info","ts":"2023-11-27T11:07:20.254209Z","caller":"traceutil/trace.go:171","msg":"trace[1524211423] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1182; }","duration":"140.99691ms","start":"2023-11-27T11:07:20.113206Z","end":"2023-11-27T11:07:20.254203Z","steps":["trace[1524211423] 'agreement among raft nodes before linearized reading'  (duration: 140.936919ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-27T11:07:20.254281Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.233213ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2023-11-27T11:07:20.254295Z","caller":"traceutil/trace.go:171","msg":"trace[648801914] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1182; }","duration":"187.247719ms","start":"2023-11-27T11:07:20.067042Z","end":"2023-11-27T11:07:20.25429Z","steps":["trace[648801914] 'agreement among raft nodes before linearized reading'  (duration: 187.219046ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-27T11:07:20.254375Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"217.359901ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2023-11-27T11:07:20.254387Z","caller":"traceutil/trace.go:171","msg":"trace[1174022272] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1182; }","duration":"217.39836ms","start":"2023-11-27T11:07:20.036985Z","end":"2023-11-27T11:07:20.254384Z","steps":["trace[1174022272] 'agreement among raft nodes before linearized reading'  (duration: 217.371789ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-27T11:07:20.25443Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"273.413846ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" ","response":"range_response_count:1 size:501"}
	{"level":"info","ts":"2023-11-27T11:07:20.254446Z","caller":"traceutil/trace.go:171","msg":"trace[1184292827] range","detail":"{range_begin:/registry/leases/ingress-nginx/ingress-nginx-leader; range_end:; response_count:1; response_revision:1182; }","duration":"273.430408ms","start":"2023-11-27T11:07:19.98101Z","end":"2023-11-27T11:07:20.254441Z","steps":["trace[1184292827] 'agreement among raft nodes before linearized reading'  (duration: 273.395016ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-27T11:07:22.465704Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.47555ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/test-pvc\" ","response":"range_response_count:1 size:1412"}
	{"level":"info","ts":"2023-11-27T11:07:22.465887Z","caller":"traceutil/trace.go:171","msg":"trace[329927035] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/test-pvc; range_end:; response_count:1; response_revision:1208; }","duration":"120.663123ms","start":"2023-11-27T11:07:22.345206Z","end":"2023-11-27T11:07:22.465869Z","steps":["trace[329927035] 'range keys from in-memory index tree'  (duration: 120.205507ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [a482f158fb19d95cf7d402f57c6e9d2e46e10fb04f6680dde7a119ca1f053566] <==
	* 2023/11/27 11:07:12 GCP Auth Webhook started!
	2023/11/27 11:07:14 Ready to marshal response ...
	2023/11/27 11:07:14 Ready to write response ...
	2023/11/27 11:07:15 Ready to marshal response ...
	2023/11/27 11:07:15 Ready to write response ...
	2023/11/27 11:07:15 Ready to marshal response ...
	2023/11/27 11:07:15 Ready to write response ...
	2023/11/27 11:07:18 Ready to marshal response ...
	2023/11/27 11:07:18 Ready to write response ...
	2023/11/27 11:07:19 Ready to marshal response ...
	2023/11/27 11:07:19 Ready to write response ...
	2023/11/27 11:07:20 Ready to marshal response ...
	2023/11/27 11:07:20 Ready to write response ...
	2023/11/27 11:07:23 Ready to marshal response ...
	2023/11/27 11:07:23 Ready to write response ...
	2023/11/27 11:07:26 Ready to marshal response ...
	2023/11/27 11:07:26 Ready to write response ...
	2023/11/27 11:07:33 Ready to marshal response ...
	2023/11/27 11:07:33 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  11:07:34 up 2 min,  0 users,  load average: 3.16, 1.62, 0.63
	Linux addons-824928 5.10.57 #1 SMP Thu Nov 16 18:26:12 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [379f4d6edc49a5fa206123dbd4f0a820d4d98136b361ebda9e0c5c0591bfebcb] <==
	* I1127 11:06:00.682363       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-admission" clusterIPs={"IPv4":"10.109.30.10"}
	I1127 11:06:00.782449       1 controller.go:624] quota admission added evaluator for: jobs.batch
	W1127 11:06:01.729859       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1127 11:06:03.010581       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.109.102.224"}
	I1127 11:06:03.061623       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I1127 11:06:03.320370       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.106.59.177"}
	W1127 11:06:04.346131       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1127 11:06:05.593140       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.96.44.154"}
	E1127 11:06:27.738654       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.82.21:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.82.21:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.82.21:443: connect: connection refused
	W1127 11:06:27.739903       1 handler_proxy.go:93] no RequestInfo found in the context
	E1127 11:06:27.739961       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E1127 11:06:27.740573       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.82.21:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.82.21:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.82.21:443: connect: connection refused
	I1127 11:06:27.740625       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1127 11:06:27.744605       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.82.21:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.82.21:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.82.21:443: connect: connection refused
	I1127 11:06:27.829693       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1127 11:06:34.790359       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1127 11:07:14.993429       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.199.3"}
	I1127 11:07:26.544605       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1127 11:07:26.784152       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.95.10"}
	I1127 11:07:34.547199       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1127 11:07:34.565358       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1127 11:07:34.791928       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [2f552f2f47593445e48ed8bc9701d6ca8e86dfb3800273820d05d4818c5a0d35] <==
	* I1127 11:06:56.251613       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="9.887905ms"
	I1127 11:06:56.252646       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="111.241µs"
	I1127 11:07:12.103510       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="99.883µs"
	I1127 11:07:13.123224       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="16.791559ms"
	I1127 11:07:13.123583       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="158.546µs"
	I1127 11:07:15.028196       1 event.go:307] "Event occurred" object="headlamp/headlamp" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set headlamp-777fd4b855 to 1"
	I1127 11:07:15.077516       1 event.go:307] "Event occurred" object="headlamp/headlamp-777fd4b855" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: headlamp-777fd4b855-htlfd"
	I1127 11:07:15.098559       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-777fd4b855" duration="69.869329ms"
	I1127 11:07:15.111058       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-777fd4b855" duration="12.353373ms"
	I1127 11:07:15.112036       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-777fd4b855" duration="102.164µs"
	I1127 11:07:15.118089       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-777fd4b855" duration="348.468µs"
	I1127 11:07:19.025336       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I1127 11:07:19.038327       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I1127 11:07:19.094242       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I1127 11:07:19.102449       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I1127 11:07:19.557491       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I1127 11:07:19.800150       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1127 11:07:19.821468       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1127 11:07:21.164891       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-777fd4b855" duration="165.378µs"
	I1127 11:07:22.186481       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-777fd4b855" duration="9.331898ms"
	I1127 11:07:22.187639       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-777fd4b855" duration="56.067µs"
	I1127 11:07:25.121916       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="37.286632ms"
	I1127 11:07:25.122642       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="164.505µs"
	I1127 11:07:25.969524       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/tiller-deploy-7b677967b9" duration="6.135µs"
	I1127 11:07:28.315080       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="40.263µs"
	
	* 
	* ==> kube-proxy [b857e9fe5d74ac1583f97f400bfd2aef259887bccfef09092911fb9c39d5ca31] <==
	* I1127 11:05:51.421835       1 server_others.go:69] "Using iptables proxy"
	I1127 11:05:51.433309       1 node.go:141] Successfully retrieved node IP: 192.168.39.110
	I1127 11:05:51.533163       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1127 11:05:51.533183       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1127 11:05:51.536059       1 server_others.go:152] "Using iptables Proxier"
	I1127 11:05:51.536144       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1127 11:05:51.536454       1 server.go:846] "Version info" version="v1.28.4"
	I1127 11:05:51.536490       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1127 11:05:51.537722       1 config.go:188] "Starting service config controller"
	I1127 11:05:51.537913       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1127 11:05:51.538023       1 config.go:97] "Starting endpoint slice config controller"
	I1127 11:05:51.538029       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1127 11:05:51.539664       1 config.go:315] "Starting node config controller"
	I1127 11:05:51.539705       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1127 11:05:51.648514       1 shared_informer.go:318] Caches are synced for node config
	I1127 11:05:51.648565       1 shared_informer.go:318] Caches are synced for service config
	I1127 11:05:51.648586       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [db0fc1fb6d6ec49a4bb0b7134f596c2ab435ccde9f4107725eab928aaf2689cf] <==
	* E1127 11:05:34.886251       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1127 11:05:34.886263       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1127 11:05:34.886335       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1127 11:05:34.886344       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1127 11:05:35.697603       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1127 11:05:35.697652       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1127 11:05:35.714083       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1127 11:05:35.714137       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1127 11:05:35.784110       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1127 11:05:35.784164       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1127 11:05:35.803126       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1127 11:05:35.803175       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1127 11:05:35.810650       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1127 11:05:35.810706       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1127 11:05:35.901964       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1127 11:05:35.902020       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1127 11:05:36.032581       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1127 11:05:36.032636       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1127 11:05:36.042370       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1127 11:05:36.042419       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1127 11:05:36.057168       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1127 11:05:36.057220       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1127 11:05:36.100944       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1127 11:05:36.100996       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I1127 11:05:39.135991       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-11-27 11:04:49 UTC, ends at Mon 2023-11-27 11:07:35 UTC. --
	Nov 27 11:07:31 addons-824928 kubelet[1349]: W1127 11:07:31.032231    1349 watcher.go:93] Error while processing event ("/sys/fs/cgroup/perf_event/kubepods/besteffort/pod5ef3ddaf-cd65-439c-96d1-004f5009e259/e9c8065639c8bfb47a3931163147bc7e23f532f4004aba19720f89a7635f6a09": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/perf_event/kubepods/besteffort/pod5ef3ddaf-cd65-439c-96d1-004f5009e259/e9c8065639c8bfb47a3931163147bc7e23f532f4004aba19720f89a7635f6a09: no such file or directory
	Nov 27 11:07:31 addons-824928 kubelet[1349]: E1127 11:07:31.057631    1349 cadvisor_stats_provider.go:444] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/pod15304132-0f36-4a37-9112-fe10682b59ad\": RecentStats: unable to find data in memory cache], [\"/kubepods/besteffort/poda24cd451-a8f2-4e37-874c-651af1e7dec9/6e8e24336d11ea2d7f705ff337314f520b52eab03cf6f92e1fdf8bc7c1f684d7\": RecentStats: unable to find data in memory cache], [\"/kubepods/besteffort/pod1512f5af-41ad-4c42-b3cf-bb5f55a5eb30/510a7cf61e14219ec09bad33b207913b1bfa790c02bdf5df32c22b15df2778dd\": RecentStats: unable to find data in memory cache], [\"/kubepods/besteffort/podcf498dca-33b0-4a09-bd1a-f75249689b04\": RecentStats: unable to find data in memory cache], [\"/kubepods/besteffort/poda24cd451-a8f2-4e37-874c-651af1e7dec9\": RecentStats: unable to find data in memory cache], [\"/kubepods/besteffort/pod1512f5af-41ad-4c42-b3cf-bb5f55a5eb30\": RecentStats: unable to find data in memory cache], [\"/kubepods/besteffort/pod236d89eb-a459-4607-9b64-24472b9972b9/4d5cb481d7f9a20eba00f23a6ea3c3a54fe8caf63ba626b207d064649c9ea799\": RecentStats: unable to find data in memory cache], [\"/kubepods/besteffort/poda24cd451-a8f2-4e37-874c-651af1e7dec9/18d9bc7257b9766192ce5bd65dff74b2e4aaa3d75ae755eb81928afec4eac2fd\": RecentStats: unable to find data in memory cache], [\"/kubepods/besteffort/podcf498dca-33b0-4a09-bd1a-f75249689b04/bd3a36d9064ff8485b8bd753625d3190b30c90f977eed83d3759096b5781431c\": RecentStats: unable to find data in memory cache]"
	Nov 27 11:07:32 addons-824928 kubelet[1349]: I1127 11:07:32.651051    1349 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=4.25581646 podCreationTimestamp="2023-11-27 11:07:26 +0000 UTC" firstStartedPulling="2023-11-27 11:07:27.893872714 +0000 UTC m=+109.904350784" lastFinishedPulling="2023-11-27 11:07:30.289061889 +0000 UTC m=+112.299539970" observedRunningTime="2023-11-27 11:07:31.508846895 +0000 UTC m=+113.519324987" watchObservedRunningTime="2023-11-27 11:07:32.651005646 +0000 UTC m=+114.661483735"
	Nov 27 11:07:32 addons-824928 kubelet[1349]: I1127 11:07:32.776087    1349 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5ef3ddaf-cd65-439c-96d1-004f5009e259-gcp-creds\") pod \"5ef3ddaf-cd65-439c-96d1-004f5009e259\" (UID: \"5ef3ddaf-cd65-439c-96d1-004f5009e259\") "
	Nov 27 11:07:32 addons-824928 kubelet[1349]: I1127 11:07:32.776260    1349 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/5ef3ddaf-cd65-439c-96d1-004f5009e259-pvc-3ce46789-1491-423b-b053-97f61c7a2812\") pod \"5ef3ddaf-cd65-439c-96d1-004f5009e259\" (UID: \"5ef3ddaf-cd65-439c-96d1-004f5009e259\") "
	Nov 27 11:07:32 addons-824928 kubelet[1349]: I1127 11:07:32.776332    1349 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57bl2\" (UniqueName: \"kubernetes.io/projected/5ef3ddaf-cd65-439c-96d1-004f5009e259-kube-api-access-57bl2\") pod \"5ef3ddaf-cd65-439c-96d1-004f5009e259\" (UID: \"5ef3ddaf-cd65-439c-96d1-004f5009e259\") "
	Nov 27 11:07:32 addons-824928 kubelet[1349]: I1127 11:07:32.776851    1349 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ef3ddaf-cd65-439c-96d1-004f5009e259-pvc-3ce46789-1491-423b-b053-97f61c7a2812" (OuterVolumeSpecName: "data") pod "5ef3ddaf-cd65-439c-96d1-004f5009e259" (UID: "5ef3ddaf-cd65-439c-96d1-004f5009e259"). InnerVolumeSpecName "pvc-3ce46789-1491-423b-b053-97f61c7a2812". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Nov 27 11:07:32 addons-824928 kubelet[1349]: I1127 11:07:32.777282    1349 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ef3ddaf-cd65-439c-96d1-004f5009e259-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "5ef3ddaf-cd65-439c-96d1-004f5009e259" (UID: "5ef3ddaf-cd65-439c-96d1-004f5009e259"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Nov 27 11:07:32 addons-824928 kubelet[1349]: I1127 11:07:32.784492    1349 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ef3ddaf-cd65-439c-96d1-004f5009e259-kube-api-access-57bl2" (OuterVolumeSpecName: "kube-api-access-57bl2") pod "5ef3ddaf-cd65-439c-96d1-004f5009e259" (UID: "5ef3ddaf-cd65-439c-96d1-004f5009e259"). InnerVolumeSpecName "kube-api-access-57bl2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 27 11:07:32 addons-824928 kubelet[1349]: I1127 11:07:32.877576    1349 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-57bl2\" (UniqueName: \"kubernetes.io/projected/5ef3ddaf-cd65-439c-96d1-004f5009e259-kube-api-access-57bl2\") on node \"addons-824928\" DevicePath \"\""
	Nov 27 11:07:32 addons-824928 kubelet[1349]: I1127 11:07:32.877610    1349 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5ef3ddaf-cd65-439c-96d1-004f5009e259-gcp-creds\") on node \"addons-824928\" DevicePath \"\""
	Nov 27 11:07:32 addons-824928 kubelet[1349]: I1127 11:07:32.877633    1349 reconciler_common.go:300] "Volume detached for volume \"pvc-3ce46789-1491-423b-b053-97f61c7a2812\" (UniqueName: \"kubernetes.io/host-path/5ef3ddaf-cd65-439c-96d1-004f5009e259-pvc-3ce46789-1491-423b-b053-97f61c7a2812\") on node \"addons-824928\" DevicePath \"\""
	Nov 27 11:07:33 addons-824928 kubelet[1349]: I1127 11:07:33.492729    1349 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9ac13a7f865f50e2ac277a9f17fe44df394250b7dbd3f949d3b273c90ed6d67"
	Nov 27 11:07:33 addons-824928 kubelet[1349]: I1127 11:07:33.918180    1349 topology_manager.go:215] "Topology Admit Handler" podUID="792972b1-04f2-4831-bb4d-b844abd8116e" podNamespace="local-path-storage" podName="helper-pod-delete-pvc-3ce46789-1491-423b-b053-97f61c7a2812"
	Nov 27 11:07:33 addons-824928 kubelet[1349]: E1127 11:07:33.919051    1349 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5ef3ddaf-cd65-439c-96d1-004f5009e259" containerName="busybox"
	Nov 27 11:07:33 addons-824928 kubelet[1349]: E1127 11:07:33.919218    1349 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="236d89eb-a459-4607-9b64-24472b9972b9" containerName="registry"
	Nov 27 11:07:33 addons-824928 kubelet[1349]: E1127 11:07:33.919341    1349 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cf498dca-33b0-4a09-bd1a-f75249689b04" containerName="registry-proxy"
	Nov 27 11:07:33 addons-824928 kubelet[1349]: I1127 11:07:33.919578    1349 memory_manager.go:346] "RemoveStaleState removing state" podUID="cf498dca-33b0-4a09-bd1a-f75249689b04" containerName="registry-proxy"
	Nov 27 11:07:33 addons-824928 kubelet[1349]: I1127 11:07:33.919691    1349 memory_manager.go:346] "RemoveStaleState removing state" podUID="236d89eb-a459-4607-9b64-24472b9972b9" containerName="registry"
	Nov 27 11:07:33 addons-824928 kubelet[1349]: I1127 11:07:33.919842    1349 memory_manager.go:346] "RemoveStaleState removing state" podUID="5ef3ddaf-cd65-439c-96d1-004f5009e259" containerName="busybox"
	Nov 27 11:07:34 addons-824928 kubelet[1349]: I1127 11:07:34.091154    1349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/792972b1-04f2-4831-bb4d-b844abd8116e-script\") pod \"helper-pod-delete-pvc-3ce46789-1491-423b-b053-97f61c7a2812\" (UID: \"792972b1-04f2-4831-bb4d-b844abd8116e\") " pod="local-path-storage/helper-pod-delete-pvc-3ce46789-1491-423b-b053-97f61c7a2812"
	Nov 27 11:07:34 addons-824928 kubelet[1349]: I1127 11:07:34.091296    1349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dpt4\" (UniqueName: \"kubernetes.io/projected/792972b1-04f2-4831-bb4d-b844abd8116e-kube-api-access-7dpt4\") pod \"helper-pod-delete-pvc-3ce46789-1491-423b-b053-97f61c7a2812\" (UID: \"792972b1-04f2-4831-bb4d-b844abd8116e\") " pod="local-path-storage/helper-pod-delete-pvc-3ce46789-1491-423b-b053-97f61c7a2812"
	Nov 27 11:07:34 addons-824928 kubelet[1349]: I1127 11:07:34.091665    1349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/792972b1-04f2-4831-bb4d-b844abd8116e-gcp-creds\") pod \"helper-pod-delete-pvc-3ce46789-1491-423b-b053-97f61c7a2812\" (UID: \"792972b1-04f2-4831-bb4d-b844abd8116e\") " pod="local-path-storage/helper-pod-delete-pvc-3ce46789-1491-423b-b053-97f61c7a2812"
	Nov 27 11:07:34 addons-824928 kubelet[1349]: I1127 11:07:34.091866    1349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/792972b1-04f2-4831-bb4d-b844abd8116e-data\") pod \"helper-pod-delete-pvc-3ce46789-1491-423b-b053-97f61c7a2812\" (UID: \"792972b1-04f2-4831-bb4d-b844abd8116e\") " pod="local-path-storage/helper-pod-delete-pvc-3ce46789-1491-423b-b053-97f61c7a2812"
	Nov 27 11:07:34 addons-824928 kubelet[1349]: I1127 11:07:34.334575    1349 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5ef3ddaf-cd65-439c-96d1-004f5009e259" path="/var/lib/kubelet/pods/5ef3ddaf-cd65-439c-96d1-004f5009e259/volumes"
	
	* 
	* ==> storage-provisioner [db2e6b43a3ecdc35896741455df085be7428d3a0cbff5afd3267b3a92a686bcd] <==
	* I1127 11:06:17.024547       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1127 11:06:17.061145       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1127 11:06:17.061527       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1127 11:06:17.070066       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1127 11:06:17.070693       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-824928_9a2c7e99-9cf1-43b8-997d-14a22e48f9e4!
	I1127 11:06:17.072079       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"75c653bf-1526-467a-807b-64ca9e1c0f07", APIVersion:"v1", ResourceVersion:"869", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-824928_9a2c7e99-9cf1-43b8-997d-14a22e48f9e4 became leader
	I1127 11:06:17.171976       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-824928_9a2c7e99-9cf1-43b8-997d-14a22e48f9e4!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-824928 -n addons-824928
helpers_test.go:261: (dbg) Run:  kubectl --context addons-824928 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: gadget-wv867 ingress-nginx-admission-create-jx2xm ingress-nginx-admission-patch-xrsln helper-pod-delete-pvc-3ce46789-1491-423b-b053-97f61c7a2812
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-824928 describe pod gadget-wv867 ingress-nginx-admission-create-jx2xm ingress-nginx-admission-patch-xrsln helper-pod-delete-pvc-3ce46789-1491-423b-b053-97f61c7a2812
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-824928 describe pod gadget-wv867 ingress-nginx-admission-create-jx2xm ingress-nginx-admission-patch-xrsln helper-pod-delete-pvc-3ce46789-1491-423b-b053-97f61c7a2812: exit status 1 (64.059029ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "gadget-wv867" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-jx2xm" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xrsln" not found
	Error from server (NotFound): pods "helper-pod-delete-pvc-3ce46789-1491-423b-b053-97f61c7a2812" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-824928 describe pod gadget-wv867 ingress-nginx-admission-create-jx2xm ingress-nginx-admission-patch-xrsln helper-pod-delete-pvc-3ce46789-1491-423b-b053-97f61c7a2812: exit status 1
--- FAIL: TestAddons/parallel/MetricsServer (8.93s)
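
The kube-apiserver log above shows v1beta1.metrics.k8s.io flapping between 503 "service unavailable" and "connection refused" right around the time the addon is disabled. A minimal manual triage sketch, assuming a kubeconfig that still resolves the addons-824928 context and that the addon's Service keeps the default metrics-server name in kube-system (neither is guaranteed by the test output above; these commands are not part of the test):

	kubectl --context addons-824928 get apiservice v1beta1.metrics.k8s.io -o wide
	kubectl --context addons-824928 -n kube-system get endpoints metrics-server
	kubectl --context addons-824928 get --raw /apis/metrics.k8s.io/v1beta1

If the pod stays Running but the raw call keeps returning 503, the aggregation path through the service IP (10.97.82.21 in this run) is the more likely culprit than the metrics pipeline inside the pod.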

                                                
                                    
x
+
TestErrorSpam/setup (64.04s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-065332 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-065332 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-065332 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-065332 --driver=kvm2  --container-runtime=containerd: (1m4.037944683s)
error_spam_test.go:96: unexpected stderr: "X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17644-333834/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5: no such file or directory"
error_spam_test.go:110: minikube stdout:
* [nospam-065332] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=17644
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/17644-333834/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-333834/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on user configuration
* Starting control plane node nospam-065332 in cluster nospam-065332
* Creating kvm2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.28.4 on containerd 1.7.9 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Verifying Kubernetes components...
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-065332" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17644-333834/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5: no such file or directory
--- FAIL: TestErrorSpam/setup (64.04s)
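
The cluster start itself completed (the stdout above ends with the Done! line); the test fails only because of the unexpected stderr about the missing cached storage-provisioner image. A hedged sketch for checking and repopulating that cache entry when reproducing locally, assuming the same workspace layout as the jenkins run and that this minikube build still accepts the cache subcommand:

	ls /home/jenkins/minikube-integration/17644-333834/.minikube/cache/images/amd64/gcr.io/k8s-minikube/
	out/minikube-linux-amd64 cache add gcr.io/k8s-minikube/storage-provisioner:v5

With the image present under cache/images, the "Unable to load cached images" line that error_spam_test.go:96 flags should not appear on the next start.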

                                                
                                    

Test pass (268/306)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 8.31
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.28.4/json-events 5.64
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.15
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.15
19 TestBinaryMirror 0.59
20 TestOffline 129.14
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
25 TestAddons/Setup 157.46
27 TestAddons/parallel/Registry 15.07
28 TestAddons/parallel/Ingress 22.33
29 TestAddons/parallel/InspektorGadget 11.44
31 TestAddons/parallel/HelmTiller 12.68
33 TestAddons/parallel/CSI 48.03
34 TestAddons/parallel/Headlamp 13.66
35 TestAddons/parallel/CloudSpanner 5.67
36 TestAddons/parallel/LocalPath 58.74
37 TestAddons/parallel/NvidiaDevicePlugin 5.83
40 TestAddons/serial/GCPAuth/Namespaces 0.13
41 TestAddons/StoppedEnableDisable 92.9
42 TestCertOptions 78.91
43 TestCertExpiration 293.83
45 TestForceSystemdFlag 101.78
46 TestForceSystemdEnv 128.33
48 TestKVMDriverInstallOrUpdate 2.86
53 TestErrorSpam/start 0.41
54 TestErrorSpam/status 0.81
55 TestErrorSpam/pause 1.6
56 TestErrorSpam/unpause 1.75
57 TestErrorSpam/stop 2.28
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 90.8
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 6.25
64 TestFunctional/serial/KubeContext 0.05
65 TestFunctional/serial/KubectlGetPods 0.07
68 TestFunctional/serial/CacheCmd/cache/add_remote 3.73
69 TestFunctional/serial/CacheCmd/cache/add_local 1.63
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
71 TestFunctional/serial/CacheCmd/cache/list 0.06
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
73 TestFunctional/serial/CacheCmd/cache/cache_reload 2.21
74 TestFunctional/serial/CacheCmd/cache/delete 0.13
75 TestFunctional/serial/MinikubeKubectlCmd 0.13
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
77 TestFunctional/serial/ExtraConfig 43.46
78 TestFunctional/serial/ComponentHealth 0.07
79 TestFunctional/serial/LogsCmd 1.49
80 TestFunctional/serial/LogsFileCmd 1.54
81 TestFunctional/serial/InvalidService 4.15
83 TestFunctional/parallel/ConfigCmd 0.47
84 TestFunctional/parallel/DashboardCmd 21.43
85 TestFunctional/parallel/DryRun 0.33
86 TestFunctional/parallel/InternationalLanguage 0.18
87 TestFunctional/parallel/StatusCmd 0.99
91 TestFunctional/parallel/ServiceCmdConnect 12.17
92 TestFunctional/parallel/AddonsCmd 0.18
93 TestFunctional/parallel/PersistentVolumeClaim 39.23
95 TestFunctional/parallel/SSHCmd 0.55
96 TestFunctional/parallel/CpCmd 1.1
97 TestFunctional/parallel/MySQL 29.71
98 TestFunctional/parallel/FileSync 0.26
99 TestFunctional/parallel/CertSync 1.59
103 TestFunctional/parallel/NodeLabels 0.1
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.52
107 TestFunctional/parallel/License 0.15
108 TestFunctional/parallel/ServiceCmd/DeployApp 10.28
118 TestFunctional/parallel/Version/short 0.08
119 TestFunctional/parallel/Version/components 1.42
120 TestFunctional/parallel/ImageCommands/ImageListShort 0.38
121 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
122 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
123 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
124 TestFunctional/parallel/ImageCommands/ImageBuild 4.26
125 TestFunctional/parallel/ImageCommands/Setup 0.88
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.13
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.99
128 TestFunctional/parallel/ServiceCmd/List 0.32
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.28
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.34
131 TestFunctional/parallel/ServiceCmd/Format 0.34
132 TestFunctional/parallel/ServiceCmd/URL 0.4
133 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.5
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
135 TestFunctional/parallel/MountCmd/any-port 6.77
136 TestFunctional/parallel/ProfileCmd/profile_list 0.36
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
138 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
139 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
140 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
141 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.94
142 TestFunctional/parallel/MountCmd/specific-port 1.93
143 TestFunctional/parallel/ImageCommands/ImageRemove 0.81
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.47
145 TestFunctional/parallel/MountCmd/VerifyCleanup 1.91
146 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.84
147 TestFunctional/delete_addon-resizer_images 0.07
148 TestFunctional/delete_my-image_image 0.02
149 TestFunctional/delete_minikube_cached_images 0.02
153 TestIngressAddonLegacy/StartLegacyK8sCluster 124.23
155 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.96
156 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.59
157 TestIngressAddonLegacy/serial/ValidateIngressAddons 37.6
160 TestJSONOutput/start/Command 118.79
161 TestJSONOutput/start/Audit 0
163 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
166 TestJSONOutput/pause/Command 0.67
167 TestJSONOutput/pause/Audit 0
169 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
170 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/unpause/Command 0.62
173 TestJSONOutput/unpause/Audit 0
175 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/stop/Command 7.11
179 TestJSONOutput/stop/Audit 0
181 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
183 TestErrorJSONOutput 0.23
188 TestMainNoArgs 0.06
189 TestMinikubeProfile 128.08
192 TestMountStart/serial/StartWithMountFirst 31.6
193 TestMountStart/serial/VerifyMountFirst 0.41
194 TestMountStart/serial/StartWithMountSecond 30.24
195 TestMountStart/serial/VerifyMountSecond 0.4
196 TestMountStart/serial/DeleteFirst 0.67
197 TestMountStart/serial/VerifyMountPostDelete 0.41
198 TestMountStart/serial/Stop 1.14
199 TestMountStart/serial/RestartStopped 27.8
200 TestMountStart/serial/VerifyMountPostStop 0.43
203 TestMultiNode/serial/FreshStart2Nodes 131.27
204 TestMultiNode/serial/DeployApp2Nodes 4.29
205 TestMultiNode/serial/PingHostFrom2Pods 0.92
206 TestMultiNode/serial/AddNode 40.59
207 TestMultiNode/serial/ProfileList 0.22
208 TestMultiNode/serial/CopyFile 7.99
209 TestMultiNode/serial/StopNode 2.17
210 TestMultiNode/serial/StartAfterStop 28.09
211 TestMultiNode/serial/RestartKeepsNodes 312.54
212 TestMultiNode/serial/DeleteNode 1.81
213 TestMultiNode/serial/StopMultiNode 183.49
214 TestMultiNode/serial/RestartMultiNode 94.01
215 TestMultiNode/serial/ValidateNameConflict 67.29
220 TestPreload 243.92
222 TestScheduledStopUnix 140.38
226 TestRunningBinaryUpgrade 179.57
228 TestKubernetesUpgrade 189.58
231 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
232 TestNoKubernetes/serial/StartWithK8s 151.38
233 TestNoKubernetes/serial/StartWithStopK8s 51
234 TestNoKubernetes/serial/Start 30.11
235 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
236 TestNoKubernetes/serial/ProfileList 1.27
237 TestNoKubernetes/serial/Stop 12.28
238 TestNoKubernetes/serial/StartNoArgs 36.26
239 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
240 TestStoppedBinaryUpgrade/Setup 0.49
241 TestStoppedBinaryUpgrade/Upgrade 131.93
249 TestNetworkPlugins/group/false 4.96
261 TestPause/serial/Start 108.6
262 TestNetworkPlugins/group/auto/Start 128.66
263 TestNetworkPlugins/group/flannel/Start 140.72
264 TestStoppedBinaryUpgrade/MinikubeLogs 1.15
265 TestNetworkPlugins/group/enable-default-cni/Start 126.04
266 TestPause/serial/SecondStartNoReconfiguration 54.11
267 TestPause/serial/Pause 1.2
268 TestPause/serial/VerifyStatus 0.35
269 TestPause/serial/Unpause 0.87
270 TestPause/serial/PauseAgain 1.12
271 TestPause/serial/DeletePaused 1.42
272 TestPause/serial/VerifyDeletedResources 1.78
273 TestNetworkPlugins/group/bridge/Start 127.92
274 TestNetworkPlugins/group/auto/KubeletFlags 0.25
275 TestNetworkPlugins/group/auto/NetCatPod 11.48
276 TestNetworkPlugins/group/auto/DNS 0.19
277 TestNetworkPlugins/group/auto/Localhost 0.2
278 TestNetworkPlugins/group/auto/HairPin 0.19
279 TestNetworkPlugins/group/flannel/ControllerPod 5.03
280 TestNetworkPlugins/group/flannel/KubeletFlags 0.36
281 TestNetworkPlugins/group/flannel/NetCatPod 12.56
282 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
283 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.74
284 TestNetworkPlugins/group/calico/Start 126.5
285 TestNetworkPlugins/group/flannel/DNS 0.26
286 TestNetworkPlugins/group/flannel/Localhost 0.17
287 TestNetworkPlugins/group/flannel/HairPin 0.16
288 TestNetworkPlugins/group/enable-default-cni/DNS 26.53
289 TestNetworkPlugins/group/kindnet/Start 115.04
290 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
291 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
292 TestNetworkPlugins/group/custom-flannel/Start 132.08
293 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
294 TestNetworkPlugins/group/bridge/NetCatPod 12.63
295 TestNetworkPlugins/group/bridge/DNS 0.27
296 TestNetworkPlugins/group/bridge/Localhost 0.17
297 TestNetworkPlugins/group/bridge/HairPin 0.19
299 TestStartStop/group/old-k8s-version/serial/FirstStart 153.78
300 TestNetworkPlugins/group/calico/ControllerPod 5.03
301 TestNetworkPlugins/group/calico/KubeletFlags 0.29
302 TestNetworkPlugins/group/calico/NetCatPod 11.58
303 TestNetworkPlugins/group/kindnet/ControllerPod 5.04
304 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
305 TestNetworkPlugins/group/kindnet/NetCatPod 12.56
306 TestNetworkPlugins/group/calico/DNS 0.26
307 TestNetworkPlugins/group/calico/Localhost 0.22
308 TestNetworkPlugins/group/calico/HairPin 0.28
309 TestNetworkPlugins/group/kindnet/DNS 0.3
310 TestNetworkPlugins/group/kindnet/Localhost 0.27
311 TestNetworkPlugins/group/kindnet/HairPin 0.23
313 TestStartStop/group/no-preload/serial/FirstStart 124.77
315 TestStartStop/group/embed-certs/serial/FirstStart 157.9
316 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
317 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.49
318 TestNetworkPlugins/group/custom-flannel/DNS 0.19
319 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
320 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
322 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 107.16
323 TestStartStop/group/old-k8s-version/serial/DeployApp 9.61
324 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.17
325 TestStartStop/group/old-k8s-version/serial/Stop 93.03
326 TestStartStop/group/no-preload/serial/DeployApp 8.58
327 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.32
328 TestStartStop/group/no-preload/serial/Stop 102.02
329 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.45
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.34
331 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.97
332 TestStartStop/group/embed-certs/serial/DeployApp 8.49
333 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.3
334 TestStartStop/group/embed-certs/serial/Stop 92.09
335 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
336 TestStartStop/group/old-k8s-version/serial/SecondStart 444.01
337 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
338 TestStartStop/group/no-preload/serial/SecondStart 332.99
339 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
340 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 364.87
341 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.27
342 TestStartStop/group/embed-certs/serial/SecondStart 347.49
343 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13.04
344 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.13
345 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.32
346 TestStartStop/group/no-preload/serial/Pause 3.49
348 TestStartStop/group/newest-cni/serial/FirstStart 86.1
349 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 21.03
350 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 16.02
351 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
352 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
353 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.04
354 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.32
355 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.41
356 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.32
357 TestStartStop/group/embed-certs/serial/Pause 3.7
358 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
359 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
360 TestStartStop/group/old-k8s-version/serial/Pause 2.86
361 TestStartStop/group/newest-cni/serial/DeployApp 0
362 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.54
363 TestStartStop/group/newest-cni/serial/Stop 2.12
364 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
365 TestStartStop/group/newest-cni/serial/SecondStart 50.15
366 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
367 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
368 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
369 TestStartStop/group/newest-cni/serial/Pause 2.78
x
+
TestDownloadOnly/v1.16.0/json-events (8.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-626021 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-626021 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (8.305407422s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (8.31s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-626021
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-626021: exit status 85 (75.252625ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-626021 | jenkins | v1.32.0 | 27 Nov 23 11:04 UTC |          |
	|         | -p download-only-626021        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 11:04:20
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 11:04:20.832567  341091 out.go:296] Setting OutFile to fd 1 ...
	I1127 11:04:20.832851  341091 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:04:20.832859  341091 out.go:309] Setting ErrFile to fd 2...
	I1127 11:04:20.832864  341091 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:04:20.833026  341091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-333834/.minikube/bin
	W1127 11:04:20.833181  341091 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17644-333834/.minikube/config/config.json: open /home/jenkins/minikube-integration/17644-333834/.minikube/config/config.json: no such file or directory
	I1127 11:04:20.833772  341091 out.go:303] Setting JSON to true
	I1127 11:04:20.835396  341091 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6412,"bootTime":1701076649,"procs":1026,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 11:04:20.835465  341091 start.go:138] virtualization: kvm guest
	I1127 11:04:20.837774  341091 out.go:97] [download-only-626021] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 11:04:20.839324  341091 out.go:169] MINIKUBE_LOCATION=17644
	W1127 11:04:20.837895  341091 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17644-333834/.minikube/cache/preloaded-tarball: no such file or directory
	I1127 11:04:20.837978  341091 notify.go:220] Checking for updates...
	I1127 11:04:20.841031  341091 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 11:04:20.842399  341091 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17644-333834/kubeconfig
	I1127 11:04:20.843857  341091 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-333834/.minikube
	I1127 11:04:20.845224  341091 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1127 11:04:20.847840  341091 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1127 11:04:20.848083  341091 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 11:04:20.879805  341091 out.go:97] Using the kvm2 driver based on user configuration
	I1127 11:04:20.879832  341091 start.go:298] selected driver: kvm2
	I1127 11:04:20.879837  341091 start.go:902] validating driver "kvm2" against <nil>
	I1127 11:04:20.880160  341091 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 11:04:20.880244  341091 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17644-333834/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1127 11:04:20.895324  341091 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1127 11:04:20.895426  341091 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1127 11:04:20.896130  341091 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1127 11:04:20.896416  341091 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1127 11:04:20.896494  341091 cni.go:84] Creating CNI manager for ""
	I1127 11:04:20.896514  341091 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1127 11:04:20.896528  341091 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1127 11:04:20.896546  341091 start_flags.go:323] config:
	{Name:download-only-626021 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-626021 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 11:04:20.896929  341091 iso.go:125] acquiring lock: {Name:mkc3926f78de4c185660124f00819d5068cd8c03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 11:04:20.898957  341091 out.go:97] Downloading VM boot image ...
	I1127 11:04:20.899002  341091 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17644-333834/.minikube/cache/iso/amd64/minikube-v1.32.1-1700142131-17634-amd64.iso
	I1127 11:04:23.699089  341091 out.go:97] Starting control plane node download-only-626021 in cluster download-only-626021
	I1127 11:04:23.699116  341091 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1127 11:04:23.731636  341091 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I1127 11:04:23.731675  341091 cache.go:56] Caching tarball of preloaded images
	I1127 11:04:23.731843  341091 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1127 11:04:23.733615  341091 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1127 11:04:23.733629  341091 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I1127 11:04:23.773803  341091 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/17644-333834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-626021"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
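
For context on what the preload-exists assertion looks at: the Last Start log above downloads the v1.16.0 preload tarball into the jenkins cache. A quick manual check of the same artifact (path copied verbatim from the download log; the exact size varies by release) could be:

	ls -lh /home/jenkins/minikube-integration/17644-333834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4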

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (5.64s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-626021 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-626021 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (5.634989649s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (5.64s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-626021
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-626021: exit status 85 (77.815413ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-626021 | jenkins | v1.32.0 | 27 Nov 23 11:04 UTC |          |
	|         | -p download-only-626021        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-626021 | jenkins | v1.32.0 | 27 Nov 23 11:04 UTC |          |
	|         | -p download-only-626021        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 11:04:29
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 11:04:29.216362  341136 out.go:296] Setting OutFile to fd 1 ...
	I1127 11:04:29.216669  341136 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:04:29.216680  341136 out.go:309] Setting ErrFile to fd 2...
	I1127 11:04:29.216688  341136 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:04:29.216874  341136 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-333834/.minikube/bin
	W1127 11:04:29.217015  341136 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17644-333834/.minikube/config/config.json: open /home/jenkins/minikube-integration/17644-333834/.minikube/config/config.json: no such file or directory
	I1127 11:04:29.217525  341136 out.go:303] Setting JSON to true
	I1127 11:04:29.219155  341136 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6420,"bootTime":1701076649,"procs":910,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 11:04:29.219227  341136 start.go:138] virtualization: kvm guest
	I1127 11:04:29.221303  341136 out.go:97] [download-only-626021] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 11:04:29.222914  341136 out.go:169] MINIKUBE_LOCATION=17644
	I1127 11:04:29.221489  341136 notify.go:220] Checking for updates...
	I1127 11:04:29.225595  341136 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 11:04:29.226982  341136 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17644-333834/kubeconfig
	I1127 11:04:29.228275  341136 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-333834/.minikube
	I1127 11:04:29.229541  341136 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1127 11:04:29.232091  341136 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1127 11:04:29.232563  341136 config.go:182] Loaded profile config "download-only-626021": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W1127 11:04:29.232605  341136 start.go:810] api.Load failed for download-only-626021: filestore "download-only-626021": Docker machine "download-only-626021" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1127 11:04:29.232698  341136 driver.go:378] Setting default libvirt URI to qemu:///system
	W1127 11:04:29.232730  341136 start.go:810] api.Load failed for download-only-626021: filestore "download-only-626021": Docker machine "download-only-626021" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1127 11:04:29.264333  341136 out.go:97] Using the kvm2 driver based on existing profile
	I1127 11:04:29.264365  341136 start.go:298] selected driver: kvm2
	I1127 11:04:29.264370  341136 start.go:902] validating driver "kvm2" against &{Name:download-only-626021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-626021 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 11:04:29.264760  341136 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 11:04:29.264831  341136 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17644-333834/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1127 11:04:29.279032  341136 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1127 11:04:29.280005  341136 cni.go:84] Creating CNI manager for ""
	I1127 11:04:29.280034  341136 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1127 11:04:29.280053  341136 start_flags.go:323] config:
	{Name:download-only-626021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-626021 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 11:04:29.280260  341136 iso.go:125] acquiring lock: {Name:mkc3926f78de4c185660124f00819d5068cd8c03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 11:04:29.281971  341136 out.go:97] Starting control plane node download-only-626021 in cluster download-only-626021
	I1127 11:04:29.281987  341136 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I1127 11:04:29.315999  341136 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I1127 11:04:29.316047  341136 cache.go:56] Caching tarball of preloaded images
	I1127 11:04:29.316223  341136 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I1127 11:04:29.318021  341136 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1127 11:04:29.318068  341136 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 ...
	I1127 11:04:29.359228  341136 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4?checksum=md5:36bbd14dd3f64efb2d3840dd67e48180 -> /home/jenkins/minikube-integration/17644-333834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I1127 11:04:33.256705  341136 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 ...
	I1127 11:04:33.256824  341136 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17644-333834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 ...
	I1127 11:04:34.182689  341136 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I1127 11:04:34.182834  341136 profile.go:148] Saving config to /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/download-only-626021/config.json ...
	I1127 11:04:34.183025  341136 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I1127 11:04:34.183193  341136 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17644-333834/.minikube/cache/linux/amd64/v1.28.4/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-626021"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)
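Note: the exit status 85 from "minikube logs" above is expected for a --download-only profile, which never creates a control-plane node (hence the 'The control plane node "" does not exist' hint). The preload cached in this step can also be re-verified by hand against the md5 value embedded in the download URL; a minimal sketch, assuming md5sum is available on the agent:

	md5sum /home/jenkins/minikube-integration/17644-333834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	# expect 36bbd14dd3f64efb2d3840dd67e48180, the ?checksum=md5: value from the download URL in the log above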

                                                
                                    
TestDownloadOnly/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-626021
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-302643 --alsologtostderr --binary-mirror http://127.0.0.1:35471 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-302643" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-302643
--- PASS: TestBinaryMirror (0.59s)
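Note: TestBinaryMirror points --binary-mirror at a throwaway local server (127.0.0.1:35471 above); the flag tells minikube where to fetch the kubectl/kubelet/kubeadm binaries from. A hand-run equivalent would look like the sketch below; the profile name and mirror address are placeholders, not values from this run:

	out/minikube-linux-amd64 start --download-only -p binary-mirror-demo --binary-mirror http://<mirror-host>:<port> --driver=kvm2 --container-runtime=containerd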

                                                
                                    
TestOffline (129.14s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-208091 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-208091 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (2m7.331538296s)
helpers_test.go:175: Cleaning up "offline-containerd-208091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-208091
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-208091: (1.804136141s)
--- PASS: TestOffline (129.14s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-824928
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-824928: exit status 85 (75.339672ms)

                                                
                                                
-- stdout --
	* Profile "addons-824928" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-824928"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-824928
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-824928: exit status 85 (74.108893ms)

                                                
                                                
-- stdout --
	* Profile "addons-824928" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-824928"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (157.46s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-824928 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-824928 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m37.463069512s)
--- PASS: TestAddons/Setup (157.46s)
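Note: after a setup like the one above, the simplest way to confirm which addons actually came up is to list them against the same profile; a quick sketch (output format varies between minikube releases):

	out/minikube-linux-amd64 -p addons-824928 addons list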

                                                
                                    
TestAddons/parallel/Registry (15.07s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 21.816381ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-cbh69" [236d89eb-a459-4607-9b64-24472b9972b9] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.021275354s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vchfz" [cf498dca-33b0-4a09-bd1a-f75249689b04] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.020671989s
addons_test.go:339: (dbg) Run:  kubectl --context addons-824928 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-824928 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-824928 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.967081616s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-824928 ip
2023/11/27 11:07:27 [DEBUG] GET http://192.168.39.110:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-824928 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.07s)
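Note: the DEBUG line above shows the registry answering on port 5000 of the VM IP. A manual spot-check from the host can hit the standard registry v2 API; a sketch, assuming curl is available and that the addon serves the usual /v2/ endpoints:

	curl -sS "http://$(out/minikube-linux-amd64 -p addons-824928 ip):5000/v2/_catalog"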

                                                
                                    
TestAddons/parallel/Ingress (22.33s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-824928 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-824928 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-824928 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9bdee775-f532-4ac1-8115-c0882d645f07] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9bdee775-f532-4ac1-8115-c0882d645f07] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.018829774s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-824928 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context addons-824928 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-824928 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.110
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-824928 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-824928 addons disable ingress-dns --alsologtostderr -v=1: (1.989741375s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-824928 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-824928 addons disable ingress --alsologtostderr -v=1: (7.786498674s)
--- PASS: TestAddons/parallel/Ingress (22.33s)
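Note: the two ingress checks above can be reproduced by hand while the addon is enabled, using the same commands the test runs (the second one substitutes the VM IP via "minikube ip" instead of hard-coding 192.168.39.110):

	out/minikube-linux-amd64 -p addons-824928 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	nslookup hello-john.test "$(out/minikube-linux-amd64 -p addons-824928 ip)"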

                                                
                                    
TestAddons/parallel/InspektorGadget (11.44s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-wv867" [2c2bc7bc-e9c0-4700-8686-27f80a5e14d6] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.01518138s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-824928
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-824928: (6.426283297s)
--- PASS: TestAddons/parallel/InspektorGadget (11.44s)

                                                
                                    
TestAddons/parallel/HelmTiller (12.68s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 21.917987ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-9ncbh" [a24cd451-a8f2-4e37-874c-651af1e7dec9] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.022346319s
addons_test.go:472: (dbg) Run:  kubectl --context addons-824928 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-824928 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.922262565s)
addons_test.go:477: kubectl --context addons-824928 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-824928 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.68s)
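Note: the "Unable to use a TTY" warning above is expected because the test's stdin is not a terminal. Dropping -t and keeping only -i (so --rm still attaches and cleans up the pod) avoids the warning; a sketch of the same check:

	kubectl --context addons-824928 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -i --namespace=kube-system -- version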

                                                
                                    
TestAddons/parallel/CSI (48.03s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 5.351784ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-824928 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-824928 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-824928 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-824928 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-824928 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-824928 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-824928 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-824928 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-824928 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-824928 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-824928 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-824928 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-824928 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-824928 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [06576727-6dfd-4714-8a6d-ba0a17807b97] Pending
helpers_test.go:344: "task-pv-pod" [06576727-6dfd-4714-8a6d-ba0a17807b97] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [06576727-6dfd-4714-8a6d-ba0a17807b97] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.028251952s
addons_test.go:583: (dbg) Run:  kubectl --context addons-824928 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-824928 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-824928 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-824928 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-824928 delete pod task-pv-pod
addons_test.go:593: (dbg) Done: kubectl --context addons-824928 delete pod task-pv-pod: (1.382107529s)
addons_test.go:599: (dbg) Run:  kubectl --context addons-824928 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-824928 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-824928 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-824928 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-824928 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-824928 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-824928 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-824928 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [70c47493-cd3e-478d-be5d-b94f2a94c46a] Pending
helpers_test.go:344: "task-pv-pod-restore" [70c47493-cd3e-478d-be5d-b94f2a94c46a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [70c47493-cd3e-478d-be5d-b94f2a94c46a] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.016918492s
addons_test.go:625: (dbg) Run:  kubectl --context addons-824928 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-824928 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-824928 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-824928 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-824928 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.744852647s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-824928 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (48.03s)
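Note: the repeated "get pvc hpvc -o jsonpath={.status.phase}" calls above are the helper's polling loop while it waits for the claim to reach Bound. On reasonably recent kubectl (1.23+), the same wait can be written as a single command; a sketch:

	kubectl --context addons-824928 wait --for=jsonpath='{.status.phase}'=Bound pvc/hpvc --timeout=6m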

                                                
                                    
TestAddons/parallel/Headlamp (13.66s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-824928 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-824928 --alsologtostderr -v=1: (1.628716498s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-htlfd" [00d61bc4-7859-43b7-97f6-10835b468f24] Pending
helpers_test.go:344: "headlamp-777fd4b855-htlfd" [00d61bc4-7859-43b7-97f6-10835b468f24] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-htlfd" [00d61bc4-7859-43b7-97f6-10835b468f24] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-htlfd" [00d61bc4-7859-43b7-97f6-10835b468f24] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.030087309s
--- PASS: TestAddons/parallel/Headlamp (13.66s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.67s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-d8fxb" [b9dcf6c4-e0fe-422e-a0dc-23b726d8da37] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.023431762s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-824928
--- PASS: TestAddons/parallel/CloudSpanner (5.67s)

                                                
                                    
TestAddons/parallel/LocalPath (58.74s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-824928 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-824928 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-824928 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-824928 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-824928 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-824928 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-824928 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-824928 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-824928 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-824928 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-824928 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-824928 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [5ef3ddaf-cd65-439c-96d1-004f5009e259] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [5ef3ddaf-cd65-439c-96d1-004f5009e259] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [5ef3ddaf-cd65-439c-96d1-004f5009e259] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.019588276s
addons_test.go:890: (dbg) Run:  kubectl --context addons-824928 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-824928 ssh "cat /opt/local-path-provisioner/pvc-3ce46789-1491-423b-b053-97f61c7a2812_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-824928 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-824928 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-824928 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-amd64 -p addons-824928 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (44.083349153s)
--- PASS: TestAddons/parallel/LocalPath (58.74s)
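Note: the ssh "cat /opt/local-path-provisioner/..." step above is where the test proves the pod's data really landed on the node's local path. Listing that directory is a quick way to see what the provisioner has created; a sketch:

	out/minikube-linux-amd64 -p addons-824928 ssh "ls /opt/local-path-provisioner/"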

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.83s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-r5lrw" [de9a71c2-da70-468d-b3ff-5f1197f11582] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.039067755s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-824928
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.83s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-824928 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-824928 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
TestAddons/StoppedEnableDisable (92.9s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-824928
addons_test.go:171: (dbg) Done: out/minikube-linux-amd64 stop -p addons-824928: (1m32.565209337s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-824928
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-824928
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-824928
--- PASS: TestAddons/StoppedEnableDisable (92.90s)

                                                
                                    
TestCertOptions (78.91s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-589356 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-589356 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m17.033880484s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-589356 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-589356 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-589356 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-589356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-589356
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-589356: (1.259299384s)
--- PASS: TestCertOptions (78.91s)
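Note: the openssl step above dumps the entire apiserver certificate; when checking --apiserver-ips/--apiserver-names by hand, grepping the SAN block is usually enough. A sketch against the same profile (while it is still running; the test deletes it at the end):

	out/minikube-linux-amd64 -p cert-options-589356 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'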

                                                
                                    
TestCertExpiration (293.83s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-349610 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-349610 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m22.583628753s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-349610 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-349610 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (29.64719072s)
helpers_test.go:175: Cleaning up "cert-expiration-349610" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-349610
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-349610: (1.595299797s)
--- PASS: TestCertExpiration (293.83s)

                                                
                                    
TestForceSystemdFlag (101.78s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-396747 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-396747 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m40.28167761s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-396747 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-396747" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-396747
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-396747: (1.257623233s)
--- PASS: TestForceSystemdFlag (101.78s)
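Note: with --force-systemd and the containerd runtime, the setting this test presumably looks for in /etc/containerd/config.toml is the runc SystemdCgroup option. A manual spot-check might look like the sketch below; the profile name is a placeholder for whichever force-systemd profile is still running:

	out/minikube-linux-amd64 -p <profile> ssh "grep -n SystemdCgroup /etc/containerd/config.toml"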

                                                
                                    
TestForceSystemdEnv (128.33s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-310775 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-310775 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (2m6.353120885s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-310775 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-310775" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-310775
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-310775: (1.550098592s)
--- PASS: TestForceSystemdEnv (128.33s)

                                                
                                    
TestKVMDriverInstallOrUpdate (2.86s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (2.86s)

                                                
                                    
TestErrorSpam/start (0.41s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-065332 --log_dir /tmp/nospam-065332 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-065332 --log_dir /tmp/nospam-065332 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-065332 --log_dir /tmp/nospam-065332 start --dry-run
--- PASS: TestErrorSpam/start (0.41s)

                                                
                                    
TestErrorSpam/status (0.81s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-065332 --log_dir /tmp/nospam-065332 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-065332 --log_dir /tmp/nospam-065332 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-065332 --log_dir /tmp/nospam-065332 status
--- PASS: TestErrorSpam/status (0.81s)

                                                
                                    
TestErrorSpam/pause (1.6s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-065332 --log_dir /tmp/nospam-065332 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-065332 --log_dir /tmp/nospam-065332 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-065332 --log_dir /tmp/nospam-065332 pause
--- PASS: TestErrorSpam/pause (1.60s)

                                                
                                    
TestErrorSpam/unpause (1.75s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-065332 --log_dir /tmp/nospam-065332 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-065332 --log_dir /tmp/nospam-065332 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-065332 --log_dir /tmp/nospam-065332 unpause
--- PASS: TestErrorSpam/unpause (1.75s)

                                                
                                    
TestErrorSpam/stop (2.28s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-065332 --log_dir /tmp/nospam-065332 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-065332 --log_dir /tmp/nospam-065332 stop: (2.104071088s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-065332 --log_dir /tmp/nospam-065332 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-065332 --log_dir /tmp/nospam-065332 stop
--- PASS: TestErrorSpam/stop (2.28s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17644-333834/.minikube/files/etc/test/nested/copy/341079/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (90.8s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-087934 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E1127 11:12:13.459628  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.crt: no such file or directory
E1127 11:12:13.465543  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.crt: no such file or directory
E1127 11:12:13.475778  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.crt: no such file or directory
E1127 11:12:13.496079  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.crt: no such file or directory
E1127 11:12:13.536385  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.crt: no such file or directory
E1127 11:12:13.616764  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.crt: no such file or directory
E1127 11:12:13.777191  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.crt: no such file or directory
E1127 11:12:14.097737  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.crt: no such file or directory
E1127 11:12:14.738700  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.crt: no such file or directory
E1127 11:12:16.018998  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.crt: no such file or directory
E1127 11:12:18.580781  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.crt: no such file or directory
E1127 11:12:23.700963  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.crt: no such file or directory
E1127 11:12:33.941621  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.crt: no such file or directory
E1127 11:12:54.422637  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-087934 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m30.799277186s)
--- PASS: TestFunctional/serial/StartWithProxy (90.80s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.25s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-087934 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-087934 --alsologtostderr -v=8: (6.251316282s)
functional_test.go:659: soft start took 6.252072791s for "functional-087934" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.25s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-087934 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.73s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-087934 cache add registry.k8s.io/pause:3.1: (1.158147082s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-087934 cache add registry.k8s.io/pause:3.3: (1.290061502s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-087934 cache add registry.k8s.io/pause:latest: (1.280513946s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.73s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.63s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-087934 /tmp/TestFunctionalserialCacheCmdcacheadd_local4204892257/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 cache add minikube-local-cache-test:functional-087934
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-087934 cache add minikube-local-cache-test:functional-087934: (1.293274379s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 cache delete minikube-local-cache-test:functional-087934
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-087934
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.63s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-087934 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (241.013875ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-087934 cache reload: (1.4403958s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.21s)
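The block above is effectively a round trip through minikube's image cache: delete the image inside the node, confirm that crictl inspecti fails (the expected exit status 1), then let cache reload push the cached image back in. A hedged sketch of the same sequence run by hand, with <profile> standing in for the profile name (an assumption, not taken from this log):

minikube -p <profile> cache add registry.k8s.io/pause:latest                 # populate the host-side cache
minikube -p <profile> ssh sudo crictl rmi registry.k8s.io/pause:latest       # remove the image inside the node
minikube -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest  # expected to fail: image is gone
minikube -p <profile> cache reload                                           # copy cached images back into the node
minikube -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again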

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 kubectl -- --context functional-087934 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-087934 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (43.46s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-087934 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1127 11:13:35.382994  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-087934 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.456401836s)
functional_test.go:757: restart took 43.456547287s for "functional-087934" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.46s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-087934 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
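For reference, the health check above is a single kubectl query over the static control-plane pods, after which the test reads each pod's phase and Ready condition. A hedged sketch with an assumed context name <profile>:

kubectl --context <profile> get po -l tier=control-plane -n kube-system -o=json
# the JSON is inspected for status.phase == Running and a Ready condition of True
# across etcd, kube-apiserver, kube-controller-manager and kube-scheduler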

                                                
                                    
TestFunctional/serial/LogsCmd (1.49s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-087934 logs: (1.493881606s)
--- PASS: TestFunctional/serial/LogsCmd (1.49s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.54s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 logs --file /tmp/TestFunctionalserialLogsFileCmd3238664387/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-087934 logs --file /tmp/TestFunctionalserialLogsFileCmd3238664387/001/logs.txt: (1.537034572s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.54s)

                                                
                                    
TestFunctional/serial/InvalidService (4.15s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-087934 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-087934
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-087934: exit status 115 (308.878232ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.214:30976 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-087934 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.15s)
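Exit status 115 above is the expected negative case: the Service object exists but selects no running pod, so minikube service reports SVC_UNREACHABLE while still printing the NodePort URL table. A hedged sketch of the same check, with <profile> as an assumed context/profile name and the manifest taken from the repository's testdata:

kubectl --context <profile> apply -f testdata/invalidsvc.yaml     # service with no backing pod
minikube service invalid-svc -p <profile>                         # exits 115: SVC_UNREACHABLE
kubectl --context <profile> delete -f testdata/invalidsvc.yaml    # clean up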

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-087934 config get cpus: exit status 14 (83.836499ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-087934 config get cpus: exit status 14 (62.35731ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
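The two exit-status-14 results above are the expected behaviour of config get on a key that is not currently set. A hedged sketch of the same round trip, with <profile> standing in for the profile name:

minikube -p <profile> config set cpus 2      # store the value in the profile's config
minikube -p <profile> config get cpus        # prints 2
minikube -p <profile> config unset cpus      # drop the key again
minikube -p <profile> config get cpus        # exits 14: "specified key could not be found in config"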

                                                
                                    
TestFunctional/parallel/DashboardCmd (21.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-087934 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-087934 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 348433: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (21.43s)

                                                
                                    
TestFunctional/parallel/DryRun (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-087934 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-087934 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (171.338527ms)

                                                
                                                
-- stdout --
	* [functional-087934] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17644-333834/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-333834/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1127 11:14:17.307319  347346 out.go:296] Setting OutFile to fd 1 ...
	I1127 11:14:17.307464  347346 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:14:17.307477  347346 out.go:309] Setting ErrFile to fd 2...
	I1127 11:14:17.307485  347346 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:14:17.307697  347346 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-333834/.minikube/bin
	I1127 11:14:17.308351  347346 out.go:303] Setting JSON to false
	I1127 11:14:17.309498  347346 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7009,"bootTime":1701076649,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 11:14:17.309577  347346 start.go:138] virtualization: kvm guest
	I1127 11:14:17.311780  347346 out.go:177] * [functional-087934] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 11:14:17.313334  347346 out.go:177]   - MINIKUBE_LOCATION=17644
	I1127 11:14:17.313407  347346 notify.go:220] Checking for updates...
	I1127 11:14:17.314826  347346 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 11:14:17.316452  347346 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17644-333834/kubeconfig
	I1127 11:14:17.318071  347346 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-333834/.minikube
	I1127 11:14:17.319628  347346 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1127 11:14:17.321126  347346 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 11:14:17.323344  347346 config.go:182] Loaded profile config "functional-087934": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1127 11:14:17.324008  347346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:14:17.324086  347346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:14:17.341050  347346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33477
	I1127 11:14:17.341534  347346 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:14:17.342138  347346 main.go:141] libmachine: Using API Version  1
	I1127 11:14:17.342161  347346 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:14:17.342575  347346 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:14:17.342804  347346 main.go:141] libmachine: (functional-087934) Calling .DriverName
	I1127 11:14:17.343020  347346 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 11:14:17.343392  347346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:14:17.343440  347346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:14:17.360176  347346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39057
	I1127 11:14:17.360734  347346 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:14:17.361363  347346 main.go:141] libmachine: Using API Version  1
	I1127 11:14:17.361397  347346 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:14:17.361888  347346 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:14:17.362113  347346 main.go:141] libmachine: (functional-087934) Calling .DriverName
	I1127 11:14:17.406410  347346 out.go:177] * Using the kvm2 driver based on existing profile
	I1127 11:14:17.407893  347346 start.go:298] selected driver: kvm2
	I1127 11:14:17.407909  347346 start.go:902] validating driver "kvm2" against &{Name:functional-087934 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-087934 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.214 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 11:14:17.408056  347346 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 11:14:17.410240  347346 out.go:177] 
	W1127 11:14:17.411833  347346 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1127 11:14:17.413309  347346 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-087934 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.33s)
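Exit status 23 above is the RSRC_INSUFFICIENT_REQ_MEMORY validation firing during --dry-run, i.e. before any VM is touched. A hedged sketch of the two invocations, generalized to an assumed profile name <profile>:

minikube start -p <profile> --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=containerd
# exits 23: requested 250MiB is below the 1800MB usable minimum
minikube start -p <profile> --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=containerd
# passes validation; dry-run stops short of creating or modifying the machine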

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-087934 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-087934 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (174.775263ms)

                                                
                                                
-- stdout --
	* [functional-087934] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17644-333834/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-333834/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1127 11:14:17.642282  347434 out.go:296] Setting OutFile to fd 1 ...
	I1127 11:14:17.642463  347434 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:14:17.642475  347434 out.go:309] Setting ErrFile to fd 2...
	I1127 11:14:17.642480  347434 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:14:17.642822  347434 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-333834/.minikube/bin
	I1127 11:14:17.643453  347434 out.go:303] Setting JSON to false
	I1127 11:14:17.645190  347434 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7009,"bootTime":1701076649,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 11:14:17.645331  347434 start.go:138] virtualization: kvm guest
	I1127 11:14:17.647518  347434 out.go:177] * [functional-087934] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I1127 11:14:17.649180  347434 out.go:177]   - MINIKUBE_LOCATION=17644
	I1127 11:14:17.649266  347434 notify.go:220] Checking for updates...
	I1127 11:14:17.652085  347434 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 11:14:17.653683  347434 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17644-333834/kubeconfig
	I1127 11:14:17.654995  347434 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-333834/.minikube
	I1127 11:14:17.656397  347434 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1127 11:14:17.657754  347434 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 11:14:17.659590  347434 config.go:182] Loaded profile config "functional-087934": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1127 11:14:17.659990  347434 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:14:17.660042  347434 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:14:17.676707  347434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33575
	I1127 11:14:17.677215  347434 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:14:17.677759  347434 main.go:141] libmachine: Using API Version  1
	I1127 11:14:17.677782  347434 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:14:17.678222  347434 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:14:17.678399  347434 main.go:141] libmachine: (functional-087934) Calling .DriverName
	I1127 11:14:17.678718  347434 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 11:14:17.679072  347434 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:14:17.679115  347434 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:14:17.694413  347434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41097
	I1127 11:14:17.697403  347434 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:14:17.697911  347434 main.go:141] libmachine: Using API Version  1
	I1127 11:14:17.697937  347434 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:14:17.698294  347434 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:14:17.698496  347434 main.go:141] libmachine: (functional-087934) Calling .DriverName
	I1127 11:14:17.736458  347434 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1127 11:14:17.737912  347434 start.go:298] selected driver: kvm2
	I1127 11:14:17.737925  347434 start.go:902] validating driver "kvm2" against &{Name:functional-087934 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-087934 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.214 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 11:14:17.738049  347434 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 11:14:17.740454  347434 out.go:177] 
	W1127 11:14:17.741824  347434 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1127 11:14:17.743226  347434 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.99s)
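The three invocations above cover the default, templated, and JSON status outputs (the logged template labels one field "kublet"; the label text is free-form, so the sketch below spells it out). A hedged sketch with an assumed profile name <profile>; the .Host/.Kubelet/.APIServer/.Kubeconfig fields are the ones exercised above:

minikube -p <profile> status                 # human-readable summary
minikube -p <profile> status -f host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
minikube -p <profile> status -o json         # machine-readable output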

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (12.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-087934 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-087934 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-2pq7b" [2a75b09f-547c-444a-a55c-46f8269debf7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-2pq7b" [2a75b09f-547c-444a-a55c-46f8269debf7] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.018075626s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.214:32726
functional_test.go:1674: http://192.168.39.214:32726: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-2pq7b

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.214:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.214:32726
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.17s)
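The steps above are the standard NodePort round trip: create a deployment, expose it, ask minikube for the URL, and hit it. A hedged sketch with an assumed profile/context name <profile>; the curl step is an illustration here, not part of the logged test:

kubectl --context <profile> create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
kubectl --context <profile> expose deployment hello-node-connect --type=NodePort --port=8080
minikube -p <profile> service hello-node-connect --url             # prints http://<node-ip>:<node-port>
curl "$(minikube -p <profile> service hello-node-connect --url)"   # echoserver answers with hostname and request headers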

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (39.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [fcff34aa-d3ac-4251-bf6d-b864241ca783] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.013517188s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-087934 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-087934 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-087934 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-087934 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-087934 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6c03fcab-2c2e-490a-9801-9d9920c97d9e] Pending
helpers_test.go:344: "sp-pod" [6c03fcab-2c2e-490a-9801-9d9920c97d9e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6c03fcab-2c2e-490a-9801-9d9920c97d9e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.031760445s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-087934 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-087934 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-087934 delete -f testdata/storage-provisioner/pod.yaml: (1.511236972s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-087934 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [05afa871-ccea-46aa-9d25-ac57879c9520] Pending
helpers_test.go:344: "sp-pod" [05afa871-ccea-46aa-9d25-ac57879c9520] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [05afa871-ccea-46aa-9d25-ac57879c9520] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.155347268s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-087934 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (39.23s)
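The pass above shows that data written through the claim survives a pod delete/recreate. A hedged sketch of the same flow, using the repository's testdata manifests and an assumed context name <profile>:

kubectl --context <profile> apply -f testdata/storage-provisioner/pvc.yaml    # claim "myclaim", bound by the default StorageClass
kubectl --context <profile> apply -f testdata/storage-provisioner/pod.yaml    # pod "sp-pod" mounts the claim at /tmp/mount
kubectl --context <profile> exec sp-pod -- touch /tmp/mount/foo               # write through the mount
kubectl --context <profile> delete -f testdata/storage-provisioner/pod.yaml
kubectl --context <profile> apply -f testdata/storage-provisioner/pod.yaml    # fresh pod, same claim
kubectl --context <profile> exec sp-pod -- ls /tmp/mount                      # the file written earlier is still there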

                                                
                                    
TestFunctional/parallel/SSHCmd (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.55s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 ssh -n functional-087934 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 cp functional-087934:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2539309534/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 ssh -n functional-087934 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.10s)

                                                
                                    
TestFunctional/parallel/MySQL (29.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-087934 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-92xbv" [0ddd30b7-c080-43f5-a1dc-8c90e51adca5] Pending
helpers_test.go:344: "mysql-859648c796-92xbv" [0ddd30b7-c080-43f5-a1dc-8c90e51adca5] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-92xbv" [0ddd30b7-c080-43f5-a1dc-8c90e51adca5] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.046173752s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-087934 exec mysql-859648c796-92xbv -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-087934 exec mysql-859648c796-92xbv -- mysql -ppassword -e "show databases;": exit status 1 (249.667605ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-087934 exec mysql-859648c796-92xbv -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-087934 exec mysql-859648c796-92xbv -- mysql -ppassword -e "show databases;": exit status 1 (215.644849ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-087934 exec mysql-859648c796-92xbv -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-087934 exec mysql-859648c796-92xbv -- mysql -ppassword -e "show databases;": exit status 1 (148.834631ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
2023/11/27 11:14:47 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1803: (dbg) Run:  kubectl --context functional-087934 exec mysql-859648c796-92xbv -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (29.71s)
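The three non-zero exits above fall in the usual window between the pod reporting Running and mysqld being ready to serve (ERROR 1045 and ERROR 2002), which the test rides out by retrying. A hedged sketch of polling the same way by hand, with <profile> as an assumed context name and <mysql-pod> the pod name from the label selector:

kubectl --context <profile> get pods -l app=mysql
until kubectl --context <profile> exec <mysql-pod> -- mysql -ppassword -e "show databases;"; do
  sleep 5   # keep retrying until the server accepts the connection
done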

                                                
                                    
TestFunctional/parallel/FileSync (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/341079/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 ssh "sudo cat /etc/test/nested/copy/341079/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)

                                                
                                    
TestFunctional/parallel/CertSync (1.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/341079.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 ssh "sudo cat /etc/ssl/certs/341079.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/341079.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 ssh "sudo cat /usr/share/ca-certificates/341079.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3410792.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 ssh "sudo cat /etc/ssl/certs/3410792.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/3410792.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 ssh "sudo cat /usr/share/ca-certificates/3410792.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.59s)
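For reference, the paths checked above are where minikube places synced certificates inside the VM: copies under /etc/ssl/certs and /usr/share/ca-certificates plus hash-style entries (51391683.0, 3ec20f2e.0); the numeric file names are specific to this CI run. A hedged sketch of spot-checking them by hand with an assumed profile name <profile>:

minikube -p <profile> ssh "sudo cat /etc/ssl/certs/341079.pem"
minikube -p <profile> ssh "sudo cat /usr/share/ca-certificates/341079.pem"
minikube -p <profile> ssh "sudo cat /etc/ssl/certs/51391683.0"    # hash-style alias for the same certificate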

                                                
                                    
TestFunctional/parallel/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-087934 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-087934 ssh "sudo systemctl is-active docker": exit status 1 (234.367013ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-087934 ssh "sudo systemctl is-active crio": exit status 1 (282.786295ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)
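Both checks above confirm that the non-selected runtimes are disabled when the cluster runs containerd; ssh propagates systemctl's exit status, hence status 3 for an inactive unit. A hedged sketch with an assumed profile name <profile> (the containerd line is an addition here, not part of the logged test):

minikube -p <profile> ssh "sudo systemctl is-active containerd"   # expected: active
minikube -p <profile> ssh "sudo systemctl is-active docker"       # prints inactive; exits 3
minikube -p <profile> ssh "sudo systemctl is-active crio"         # prints inactive; exits 3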

                                                
                                    
TestFunctional/parallel/License (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.15s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-087934 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-087934 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-bt6p4" [ad1d8530-00d8-4bc3-bdba-119f09018fe4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-bt6p4" [ad1d8530-00d8-4bc3-bdba-119f09018fe4] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.028750743s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.28s)

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (1.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-amd64 -p functional-087934 version -o=json --components: (1.417405495s)
--- PASS: TestFunctional/parallel/Version/components (1.42s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-087934 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-087934
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-087934
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-087934 image ls --format short --alsologtostderr:
I1127 11:14:30.427677  348510 out.go:296] Setting OutFile to fd 1 ...
I1127 11:14:30.428013  348510 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 11:14:30.428025  348510 out.go:309] Setting ErrFile to fd 2...
I1127 11:14:30.428033  348510 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 11:14:30.428361  348510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-333834/.minikube/bin
I1127 11:14:30.429212  348510 config.go:182] Loaded profile config "functional-087934": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1127 11:14:30.429376  348510 config.go:182] Loaded profile config "functional-087934": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1127 11:14:30.430015  348510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1127 11:14:30.430076  348510 main.go:141] libmachine: Launching plugin server for driver kvm2
I1127 11:14:30.445777  348510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35629
I1127 11:14:30.446298  348510 main.go:141] libmachine: () Calling .GetVersion
I1127 11:14:30.446927  348510 main.go:141] libmachine: Using API Version  1
I1127 11:14:30.446961  348510 main.go:141] libmachine: () Calling .SetConfigRaw
I1127 11:14:30.447319  348510 main.go:141] libmachine: () Calling .GetMachineName
I1127 11:14:30.447510  348510 main.go:141] libmachine: (functional-087934) Calling .GetState
I1127 11:14:30.449482  348510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1127 11:14:30.449530  348510 main.go:141] libmachine: Launching plugin server for driver kvm2
I1127 11:14:30.464334  348510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42291
I1127 11:14:30.464784  348510 main.go:141] libmachine: () Calling .GetVersion
I1127 11:14:30.465275  348510 main.go:141] libmachine: Using API Version  1
I1127 11:14:30.465296  348510 main.go:141] libmachine: () Calling .SetConfigRaw
I1127 11:14:30.465610  348510 main.go:141] libmachine: () Calling .GetMachineName
I1127 11:14:30.465806  348510 main.go:141] libmachine: (functional-087934) Calling .DriverName
I1127 11:14:30.466017  348510 ssh_runner.go:195] Run: systemctl --version
I1127 11:14:30.466047  348510 main.go:141] libmachine: (functional-087934) Calling .GetSSHHostname
I1127 11:14:30.468770  348510 main.go:141] libmachine: (functional-087934) DBG | domain functional-087934 has defined MAC address 52:54:00:b3:61:b5 in network mk-functional-087934
I1127 11:14:30.469179  348510 main.go:141] libmachine: (functional-087934) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:61:b5", ip: ""} in network mk-functional-087934: {Iface:virbr1 ExpiryTime:2023-11-27 12:11:43 +0000 UTC Type:0 Mac:52:54:00:b3:61:b5 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:functional-087934 Clientid:01:52:54:00:b3:61:b5}
I1127 11:14:30.469232  348510 main.go:141] libmachine: (functional-087934) DBG | domain functional-087934 has defined IP address 192.168.39.214 and MAC address 52:54:00:b3:61:b5 in network mk-functional-087934
I1127 11:14:30.469295  348510 main.go:141] libmachine: (functional-087934) Calling .GetSSHPort
I1127 11:14:30.469496  348510 main.go:141] libmachine: (functional-087934) Calling .GetSSHKeyPath
I1127 11:14:30.469673  348510 main.go:141] libmachine: (functional-087934) Calling .GetSSHUsername
I1127 11:14:30.469825  348510 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-333834/.minikube/machines/functional-087934/id_rsa Username:docker}
I1127 11:14:30.583225  348510 ssh_runner.go:195] Run: sudo crictl images --output json
I1127 11:14:30.742707  348510 main.go:141] libmachine: Making call to close driver server
I1127 11:14:30.742736  348510 main.go:141] libmachine: (functional-087934) Calling .Close
I1127 11:14:30.743082  348510 main.go:141] libmachine: Successfully made call to close driver server
I1127 11:14:30.743114  348510 main.go:141] libmachine: Making call to close connection to plugin binary
I1127 11:14:30.743127  348510 main.go:141] libmachine: Making call to close driver server
I1127 11:14:30.743138  348510 main.go:141] libmachine: (functional-087934) Calling .Close
I1127 11:14:30.743377  348510 main.go:141] libmachine: Successfully made call to close driver server
I1127 11:14:30.743393  348510 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.38s)
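The image ls tests below only vary the output format. A minimal sketch of the short format, with an illustrative grep filter that is not part of the test:

    # List image references known to the cluster's container runtime
    out/minikube-linux-amd64 -p functional-087934 image ls --format short
    # Same data, different renderings (exercised by the table/json/yaml tests that follow)
    out/minikube-linux-amd64 -p functional-087934 image ls --format table
    out/minikube-linux-amd64 -p functional-087934 image ls --format short | grep registry.k8s.io/pause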

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-087934 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-scheduler              | v1.28.4            | sha256:e3db31 | 18.8MB |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:ead0a4 | 16.2MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:c7d129 | 27.7MB |
| gcr.io/google-containers/addon-resizer      | functional-087934  | sha256:ffd4cf | 10.8MB |
| registry.k8s.io/kube-apiserver              | v1.28.4            | sha256:7fe0e6 | 34.7MB |
| registry.k8s.io/kube-proxy                  | v1.28.4            | sha256:83f6cc | 24.6MB |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:73deb9 | 103MB  |
| registry.k8s.io/kube-controller-manager     | v1.28.4            | sha256:d058aa | 33.4MB |
| docker.io/library/minikube-local-cache-test | functional-087934  | sha256:26037f | 1.01kB |
| docker.io/library/nginx                     | latest             | sha256:a6bd71 | 70.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-087934 image ls --format table --alsologtostderr:
I1127 11:14:31.099689  348569 out.go:296] Setting OutFile to fd 1 ...
I1127 11:14:31.099823  348569 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 11:14:31.099832  348569 out.go:309] Setting ErrFile to fd 2...
I1127 11:14:31.099837  348569 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 11:14:31.100055  348569 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-333834/.minikube/bin
I1127 11:14:31.100935  348569 config.go:182] Loaded profile config "functional-087934": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1127 11:14:31.101110  348569 config.go:182] Loaded profile config "functional-087934": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1127 11:14:31.101748  348569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1127 11:14:31.101843  348569 main.go:141] libmachine: Launching plugin server for driver kvm2
I1127 11:14:31.118098  348569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33309
I1127 11:14:31.118583  348569 main.go:141] libmachine: () Calling .GetVersion
I1127 11:14:31.119282  348569 main.go:141] libmachine: Using API Version  1
I1127 11:14:31.119310  348569 main.go:141] libmachine: () Calling .SetConfigRaw
I1127 11:14:31.119634  348569 main.go:141] libmachine: () Calling .GetMachineName
I1127 11:14:31.119851  348569 main.go:141] libmachine: (functional-087934) Calling .GetState
I1127 11:14:31.121752  348569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1127 11:14:31.121803  348569 main.go:141] libmachine: Launching plugin server for driver kvm2
I1127 11:14:31.137023  348569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36447
I1127 11:14:31.137579  348569 main.go:141] libmachine: () Calling .GetVersion
I1127 11:14:31.138043  348569 main.go:141] libmachine: Using API Version  1
I1127 11:14:31.138070  348569 main.go:141] libmachine: () Calling .SetConfigRaw
I1127 11:14:31.138414  348569 main.go:141] libmachine: () Calling .GetMachineName
I1127 11:14:31.138573  348569 main.go:141] libmachine: (functional-087934) Calling .DriverName
I1127 11:14:31.138793  348569 ssh_runner.go:195] Run: systemctl --version
I1127 11:14:31.138822  348569 main.go:141] libmachine: (functional-087934) Calling .GetSSHHostname
I1127 11:14:31.141749  348569 main.go:141] libmachine: (functional-087934) DBG | domain functional-087934 has defined MAC address 52:54:00:b3:61:b5 in network mk-functional-087934
I1127 11:14:31.142122  348569 main.go:141] libmachine: (functional-087934) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:61:b5", ip: ""} in network mk-functional-087934: {Iface:virbr1 ExpiryTime:2023-11-27 12:11:43 +0000 UTC Type:0 Mac:52:54:00:b3:61:b5 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:functional-087934 Clientid:01:52:54:00:b3:61:b5}
I1127 11:14:31.142159  348569 main.go:141] libmachine: (functional-087934) DBG | domain functional-087934 has defined IP address 192.168.39.214 and MAC address 52:54:00:b3:61:b5 in network mk-functional-087934
I1127 11:14:31.142334  348569 main.go:141] libmachine: (functional-087934) Calling .GetSSHPort
I1127 11:14:31.142552  348569 main.go:141] libmachine: (functional-087934) Calling .GetSSHKeyPath
I1127 11:14:31.142710  348569 main.go:141] libmachine: (functional-087934) Calling .GetSSHUsername
I1127 11:14:31.142869  348569 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-333834/.minikube/machines/functional-087934/id_rsa Username:docker}
I1127 11:14:31.237401  348569 ssh_runner.go:195] Run: sudo crictl images --output json
I1127 11:14:31.292099  348569 main.go:141] libmachine: Making call to close driver server
I1127 11:14:31.292126  348569 main.go:141] libmachine: (functional-087934) Calling .Close
I1127 11:14:31.292479  348569 main.go:141] libmachine: Successfully made call to close driver server
I1127 11:14:31.292501  348569 main.go:141] libmachine: Making call to close connection to plugin binary
I1127 11:14:31.292512  348569 main.go:141] libmachine: Making call to close driver server
I1127 11:14:31.292521  348569 main.go:141] libmachine: (functional-087934) Calling .Close
I1127 11:14:31.292854  348569 main.go:141] libmachine: Successfully made call to close driver server
I1127 11:14:31.292884  348569 main.go:141] libmachine: Making call to close connection to plugin binary
I1127 11:14:31.292936  348569 main.go:141] libmachine: (functional-087934) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-087934 image ls --format json --alsologtostderr:
[{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866","repoDigests":["docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee"],"repoTags":["docker.io/library/nginx:latest"],"size":"70544635"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-087934"],"size":"10823156"},{"id":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"16190758"},{"id":"sha256:73deb9a3f7025325
92a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"102894559"},{"id":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"34683820"},{"id":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"24581402"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size
":"2395207"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"18834488"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:26037ffbff24d2e333a05286e27261e1bc51e80dcd6a1bea6674f2841d9d8357","repoDigests":[],"repoTags":["docker.io/library/minikube-loca
l-cache-test:functional-087934"],"size":"1007"},{"id":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"33420443"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"27737299"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"}]

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-087934 image ls --format json --alsologtostderr:
I1127 11:14:30.826452  348533 out.go:296] Setting OutFile to fd 1 ...
I1127 11:14:30.826656  348533 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 11:14:30.826669  348533 out.go:309] Setting ErrFile to fd 2...
I1127 11:14:30.826676  348533 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 11:14:30.827024  348533 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-333834/.minikube/bin
I1127 11:14:30.827897  348533 config.go:182] Loaded profile config "functional-087934": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1127 11:14:30.828083  348533 config.go:182] Loaded profile config "functional-087934": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1127 11:14:30.828679  348533 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1127 11:14:30.828753  348533 main.go:141] libmachine: Launching plugin server for driver kvm2
I1127 11:14:30.843811  348533 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40035
I1127 11:14:30.844387  348533 main.go:141] libmachine: () Calling .GetVersion
I1127 11:14:30.845054  348533 main.go:141] libmachine: Using API Version  1
I1127 11:14:30.845088  348533 main.go:141] libmachine: () Calling .SetConfigRaw
I1127 11:14:30.845428  348533 main.go:141] libmachine: () Calling .GetMachineName
I1127 11:14:30.845666  348533 main.go:141] libmachine: (functional-087934) Calling .GetState
I1127 11:14:30.847975  348533 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1127 11:14:30.848034  348533 main.go:141] libmachine: Launching plugin server for driver kvm2
I1127 11:14:30.863546  348533 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36055
I1127 11:14:30.864044  348533 main.go:141] libmachine: () Calling .GetVersion
I1127 11:14:30.864534  348533 main.go:141] libmachine: Using API Version  1
I1127 11:14:30.864557  348533 main.go:141] libmachine: () Calling .SetConfigRaw
I1127 11:14:30.864916  348533 main.go:141] libmachine: () Calling .GetMachineName
I1127 11:14:30.865148  348533 main.go:141] libmachine: (functional-087934) Calling .DriverName
I1127 11:14:30.865384  348533 ssh_runner.go:195] Run: systemctl --version
I1127 11:14:30.865420  348533 main.go:141] libmachine: (functional-087934) Calling .GetSSHHostname
I1127 11:14:30.868233  348533 main.go:141] libmachine: (functional-087934) DBG | domain functional-087934 has defined MAC address 52:54:00:b3:61:b5 in network mk-functional-087934
I1127 11:14:30.868742  348533 main.go:141] libmachine: (functional-087934) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:61:b5", ip: ""} in network mk-functional-087934: {Iface:virbr1 ExpiryTime:2023-11-27 12:11:43 +0000 UTC Type:0 Mac:52:54:00:b3:61:b5 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:functional-087934 Clientid:01:52:54:00:b3:61:b5}
I1127 11:14:30.868775  348533 main.go:141] libmachine: (functional-087934) DBG | domain functional-087934 has defined IP address 192.168.39.214 and MAC address 52:54:00:b3:61:b5 in network mk-functional-087934
I1127 11:14:30.869013  348533 main.go:141] libmachine: (functional-087934) Calling .GetSSHPort
I1127 11:14:30.869197  348533 main.go:141] libmachine: (functional-087934) Calling .GetSSHKeyPath
I1127 11:14:30.869353  348533 main.go:141] libmachine: (functional-087934) Calling .GetSSHUsername
I1127 11:14:30.869493  348533 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-333834/.minikube/machines/functional-087934/id_rsa Username:docker}
I1127 11:14:30.969081  348533 ssh_runner.go:195] Run: sudo crictl images --output json
I1127 11:14:31.018600  348533 main.go:141] libmachine: Making call to close driver server
I1127 11:14:31.018618  348533 main.go:141] libmachine: (functional-087934) Calling .Close
I1127 11:14:31.019000  348533 main.go:141] libmachine: Successfully made call to close driver server
I1127 11:14:31.019046  348533 main.go:141] libmachine: Making call to close connection to plugin binary
I1127 11:14:31.019063  348533 main.go:141] libmachine: Making call to close driver server
I1127 11:14:31.019066  348533 main.go:141] libmachine: (functional-087934) DBG | Closing plugin on server side
I1127 11:14:31.019075  348533 main.go:141] libmachine: (functional-087934) Calling .Close
I1127 11:14:31.019346  348533 main.go:141] libmachine: Successfully made call to close driver server
I1127 11:14:31.019386  348533 main.go:141] libmachine: (functional-087934) DBG | Closing plugin on server side
I1127 11:14:31.019407  348533 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
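Since the JSON output is an array of objects with id, repoDigests, repoTags and size fields, it lends itself to scripting. A minimal sketch; jq is an assumption here, not something the test uses:

    # Print "tag <tab> size-in-bytes", sorted, from the structured image list
    out/minikube-linux-amd64 -p functional-087934 image ls --format json \
      | jq -r '.[] | "\(.repoTags[0] // .id)\t\(.size)"' \
      | sort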

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-087934 image ls --format yaml --alsologtostderr:
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-087934
size: "10823156"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "34683820"
- id: sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "102894559"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "33420443"
- id: sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "24581402"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "27737299"
- id: sha256:26037ffbff24d2e333a05286e27261e1bc51e80dcd6a1bea6674f2841d9d8357
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-087934
size: "1007"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "16190758"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866
repoDigests:
- docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee
repoTags:
- docker.io/library/nginx:latest
size: "70544635"
- id: sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "18834488"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-087934 image ls --format yaml --alsologtostderr:
I1127 11:14:31.356225  348592 out.go:296] Setting OutFile to fd 1 ...
I1127 11:14:31.356533  348592 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 11:14:31.356544  348592 out.go:309] Setting ErrFile to fd 2...
I1127 11:14:31.356551  348592 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 11:14:31.356768  348592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-333834/.minikube/bin
I1127 11:14:31.357445  348592 config.go:182] Loaded profile config "functional-087934": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1127 11:14:31.357584  348592 config.go:182] Loaded profile config "functional-087934": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1127 11:14:31.358001  348592 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1127 11:14:31.358071  348592 main.go:141] libmachine: Launching plugin server for driver kvm2
I1127 11:14:31.373017  348592 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33683
I1127 11:14:31.373449  348592 main.go:141] libmachine: () Calling .GetVersion
I1127 11:14:31.374087  348592 main.go:141] libmachine: Using API Version  1
I1127 11:14:31.374120  348592 main.go:141] libmachine: () Calling .SetConfigRaw
I1127 11:14:31.374563  348592 main.go:141] libmachine: () Calling .GetMachineName
I1127 11:14:31.374773  348592 main.go:141] libmachine: (functional-087934) Calling .GetState
I1127 11:14:31.376677  348592 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1127 11:14:31.376725  348592 main.go:141] libmachine: Launching plugin server for driver kvm2
I1127 11:14:31.391658  348592 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40735
I1127 11:14:31.392081  348592 main.go:141] libmachine: () Calling .GetVersion
I1127 11:14:31.392569  348592 main.go:141] libmachine: Using API Version  1
I1127 11:14:31.392602  348592 main.go:141] libmachine: () Calling .SetConfigRaw
I1127 11:14:31.393053  348592 main.go:141] libmachine: () Calling .GetMachineName
I1127 11:14:31.393294  348592 main.go:141] libmachine: (functional-087934) Calling .DriverName
I1127 11:14:31.393550  348592 ssh_runner.go:195] Run: systemctl --version
I1127 11:14:31.393593  348592 main.go:141] libmachine: (functional-087934) Calling .GetSSHHostname
I1127 11:14:31.396294  348592 main.go:141] libmachine: (functional-087934) DBG | domain functional-087934 has defined MAC address 52:54:00:b3:61:b5 in network mk-functional-087934
I1127 11:14:31.396721  348592 main.go:141] libmachine: (functional-087934) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:61:b5", ip: ""} in network mk-functional-087934: {Iface:virbr1 ExpiryTime:2023-11-27 12:11:43 +0000 UTC Type:0 Mac:52:54:00:b3:61:b5 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:functional-087934 Clientid:01:52:54:00:b3:61:b5}
I1127 11:14:31.396757  348592 main.go:141] libmachine: (functional-087934) DBG | domain functional-087934 has defined IP address 192.168.39.214 and MAC address 52:54:00:b3:61:b5 in network mk-functional-087934
I1127 11:14:31.396880  348592 main.go:141] libmachine: (functional-087934) Calling .GetSSHPort
I1127 11:14:31.397057  348592 main.go:141] libmachine: (functional-087934) Calling .GetSSHKeyPath
I1127 11:14:31.397248  348592 main.go:141] libmachine: (functional-087934) Calling .GetSSHUsername
I1127 11:14:31.397401  348592 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-333834/.minikube/machines/functional-087934/id_rsa Username:docker}
I1127 11:14:31.489416  348592 ssh_runner.go:195] Run: sudo crictl images --output json
I1127 11:14:31.557598  348592 main.go:141] libmachine: Making call to close driver server
I1127 11:14:31.557619  348592 main.go:141] libmachine: (functional-087934) Calling .Close
I1127 11:14:31.557985  348592 main.go:141] libmachine: Successfully made call to close driver server
I1127 11:14:31.558051  348592 main.go:141] libmachine: (functional-087934) DBG | Closing plugin on server side
I1127 11:14:31.558053  348592 main.go:141] libmachine: Making call to close connection to plugin binary
I1127 11:14:31.558082  348592 main.go:141] libmachine: Making call to close driver server
I1127 11:14:31.558093  348592 main.go:141] libmachine: (functional-087934) Calling .Close
I1127 11:14:31.558326  348592 main.go:141] libmachine: Successfully made call to close driver server
I1127 11:14:31.558349  348592 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-087934 ssh pgrep buildkitd: exit status 1 (219.83342ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 image build -t localhost/my-image:functional-087934 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-087934 image build -t localhost/my-image:functional-087934 testdata/build --alsologtostderr: (3.783506173s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-087934 image build -t localhost/my-image:functional-087934 testdata/build --alsologtostderr:
I1127 11:14:31.841832  348646 out.go:296] Setting OutFile to fd 1 ...
I1127 11:14:31.842137  348646 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 11:14:31.842148  348646 out.go:309] Setting ErrFile to fd 2...
I1127 11:14:31.842153  348646 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 11:14:31.842377  348646 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-333834/.minikube/bin
I1127 11:14:31.842971  348646 config.go:182] Loaded profile config "functional-087934": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1127 11:14:31.843573  348646 config.go:182] Loaded profile config "functional-087934": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1127 11:14:31.844020  348646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1127 11:14:31.844086  348646 main.go:141] libmachine: Launching plugin server for driver kvm2
I1127 11:14:31.858907  348646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37893
I1127 11:14:31.859368  348646 main.go:141] libmachine: () Calling .GetVersion
I1127 11:14:31.859929  348646 main.go:141] libmachine: Using API Version  1
I1127 11:14:31.859950  348646 main.go:141] libmachine: () Calling .SetConfigRaw
I1127 11:14:31.860354  348646 main.go:141] libmachine: () Calling .GetMachineName
I1127 11:14:31.860603  348646 main.go:141] libmachine: (functional-087934) Calling .GetState
I1127 11:14:31.862649  348646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1127 11:14:31.862695  348646 main.go:141] libmachine: Launching plugin server for driver kvm2
I1127 11:14:31.877284  348646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45933
I1127 11:14:31.877694  348646 main.go:141] libmachine: () Calling .GetVersion
I1127 11:14:31.878163  348646 main.go:141] libmachine: Using API Version  1
I1127 11:14:31.878192  348646 main.go:141] libmachine: () Calling .SetConfigRaw
I1127 11:14:31.878508  348646 main.go:141] libmachine: () Calling .GetMachineName
I1127 11:14:31.878736  348646 main.go:141] libmachine: (functional-087934) Calling .DriverName
I1127 11:14:31.878969  348646 ssh_runner.go:195] Run: systemctl --version
I1127 11:14:31.878997  348646 main.go:141] libmachine: (functional-087934) Calling .GetSSHHostname
I1127 11:14:31.882133  348646 main.go:141] libmachine: (functional-087934) DBG | domain functional-087934 has defined MAC address 52:54:00:b3:61:b5 in network mk-functional-087934
I1127 11:14:31.882573  348646 main.go:141] libmachine: (functional-087934) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:61:b5", ip: ""} in network mk-functional-087934: {Iface:virbr1 ExpiryTime:2023-11-27 12:11:43 +0000 UTC Type:0 Mac:52:54:00:b3:61:b5 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:functional-087934 Clientid:01:52:54:00:b3:61:b5}
I1127 11:14:31.882638  348646 main.go:141] libmachine: (functional-087934) DBG | domain functional-087934 has defined IP address 192.168.39.214 and MAC address 52:54:00:b3:61:b5 in network mk-functional-087934
I1127 11:14:31.882782  348646 main.go:141] libmachine: (functional-087934) Calling .GetSSHPort
I1127 11:14:31.882947  348646 main.go:141] libmachine: (functional-087934) Calling .GetSSHKeyPath
I1127 11:14:31.883123  348646 main.go:141] libmachine: (functional-087934) Calling .GetSSHUsername
I1127 11:14:31.883285  348646 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-333834/.minikube/machines/functional-087934/id_rsa Username:docker}
I1127 11:14:31.977812  348646 build_images.go:151] Building image from path: /tmp/build.470303294.tar
I1127 11:14:31.977898  348646 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1127 11:14:31.988184  348646 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.470303294.tar
I1127 11:14:31.993207  348646 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.470303294.tar: stat -c "%s %y" /var/lib/minikube/build/build.470303294.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.470303294.tar': No such file or directory
I1127 11:14:31.993239  348646 ssh_runner.go:362] scp /tmp/build.470303294.tar --> /var/lib/minikube/build/build.470303294.tar (3072 bytes)
I1127 11:14:32.032715  348646 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.470303294
I1127 11:14:32.042409  348646 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.470303294 -xf /var/lib/minikube/build/build.470303294.tar
I1127 11:14:32.050972  348646 containerd.go:378] Building image: /var/lib/minikube/build/build.470303294
I1127 11:14:32.051088  348646 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.470303294 --local dockerfile=/var/lib/minikube/build/build.470303294 --output type=image,name=localhost/my-image:functional-087934
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.4s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.1s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 1.4s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:a429bc2cd722cd55ba5558dcf3d48788c662a585c560fa7f3e352f6f5efdddd1 0.0s done
#8 exporting config sha256:18c3b942289143bf7277ac607b39501fd4f8015c35052cae367f50f6c5334600 0.0s done
#8 naming to localhost/my-image:functional-087934 done
#8 DONE 0.2s
I1127 11:14:35.535571  348646 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.470303294 --local dockerfile=/var/lib/minikube/build/build.470303294 --output type=image,name=localhost/my-image:functional-087934: (3.484438701s)
I1127 11:14:35.535654  348646 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.470303294
I1127 11:14:35.550496  348646 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.470303294.tar
I1127 11:14:35.560598  348646 build_images.go:207] Built localhost/my-image:functional-087934 from /tmp/build.470303294.tar
I1127 11:14:35.560635  348646 build_images.go:123] succeeded building to: functional-087934
I1127 11:14:35.560640  348646 build_images.go:124] failed building to: 
I1127 11:14:35.560664  348646 main.go:141] libmachine: Making call to close driver server
I1127 11:14:35.560681  348646 main.go:141] libmachine: (functional-087934) Calling .Close
I1127 11:14:35.561005  348646 main.go:141] libmachine: Successfully made call to close driver server
I1127 11:14:35.561030  348646 main.go:141] libmachine: Making call to close connection to plugin binary
I1127 11:14:35.561041  348646 main.go:141] libmachine: Making call to close driver server
I1127 11:14:35.561051  348646 main.go:141] libmachine: (functional-087934) Calling .Close
I1127 11:14:35.561051  348646 main.go:141] libmachine: (functional-087934) DBG | Closing plugin on server side
I1127 11:14:35.561307  348646 main.go:141] libmachine: (functional-087934) DBG | Closing plugin on server side
I1127 11:14:35.561342  348646 main.go:141] libmachine: Successfully made call to close driver server
I1127 11:14:35.561382  348646 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.26s)
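The Dockerfile under testdata/build is not included in this log; the BuildKit steps above (#5 through #7) suggest it is roughly a three-line file. A hedged reconstruction plus the equivalent manual invocation, using hypothetical file contents:

    # Hypothetical stand-in for testdata/build, inferred from the build steps above
    mkdir -p /tmp/build-demo && cd /tmp/build-demo
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    echo demo > content.txt
    # Build inside the cluster's containerd via buildctl, then confirm the tag exists
    out/minikube-linux-amd64 -p functional-087934 image build -t localhost/my-image:functional-087934 .
    out/minikube-linux-amd64 -p functional-087934 image ls | grep localhost/my-image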

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-087934
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 image load --daemon gcr.io/google-containers/addon-resizer:functional-087934 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-087934 image load --daemon gcr.io/google-containers/addon-resizer:functional-087934 --alsologtostderr: (3.901768402s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.13s)
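image load --daemon copies an image from the host's Docker daemon into the cluster's containerd store. A minimal sketch of the round trip, mirroring the Setup test above:

    # Tag an image on the host, push it into the cluster, confirm it is visible there
    docker pull gcr.io/google-containers/addon-resizer:1.8.8
    docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-087934
    out/minikube-linux-amd64 -p functional-087934 image load --daemon gcr.io/google-containers/addon-resizer:functional-087934
    out/minikube-linux-amd64 -p functional-087934 image ls | grep addon-resizer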

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 image load --daemon gcr.io/google-containers/addon-resizer:functional-087934 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-087934 image load --daemon gcr.io/google-containers/addon-resizer:functional-087934 --alsologtostderr: (4.692328559s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.99s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.32s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 service list -o json
functional_test.go:1493: Took "277.634891ms" to run "out/minikube-linux-amd64 -p functional-087934 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.39.214:30895
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.214:30895
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)
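The service subcommands above all resolve the hello-node NodePort endpoint in different shapes. A minimal sketch that captures the URL and probes it; curl is an assumption of this sketch:

    # Resolve the NodePort URL and issue one request against it
    URL=$(out/minikube-linux-amd64 -p functional-087934 service hello-node --url)
    echo "$URL"   # e.g. http://192.168.39.214:30895
    curl -sS -o /dev/null -w '%{http_code}\n' "$URL"
    # HTTPS and template variants used by the tests above
    out/minikube-linux-amd64 -p functional-087934 service --namespace=default --https --url hello-node
    out/minikube-linux-amd64 -p functional-087934 service hello-node --url --format='{{.IP}}'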

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.2940582s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-087934
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 image load --daemon gcr.io/google-containers/addon-resizer:functional-087934 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-087934 image load --daemon gcr.io/google-containers/addon-resizer:functional-087934 --alsologtostderr: (4.930509535s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.50s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-087934 /tmp/TestFunctionalparallelMountCmdany-port1624995913/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1701083656251936354" to /tmp/TestFunctionalparallelMountCmdany-port1624995913/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1701083656251936354" to /tmp/TestFunctionalparallelMountCmdany-port1624995913/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1701083656251936354" to /tmp/TestFunctionalparallelMountCmdany-port1624995913/001/test-1701083656251936354
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-087934 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (288.408742ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 27 11:14 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 27 11:14 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 27 11:14 test-1701083656251936354
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 ssh cat /mount-9p/test-1701083656251936354
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-087934 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [8967c0df-f66d-47ab-a310-498464c857fe] Pending
helpers_test.go:344: "busybox-mount" [8967c0df-f66d-47ab-a310-498464c857fe] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [8967c0df-f66d-47ab-a310-498464c857fe] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [8967c0df-f66d-47ab-a310-498464c857fe] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.022090794s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-087934 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-087934 /tmp/TestFunctionalparallelMountCmdany-port1624995913/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.77s)
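The 9p mount flow can be driven by hand as well. A minimal sketch; the pod that writes created-by-pod comes from testdata/busybox-mount-test.yaml, which this log does not reproduce, so it is left out here:

    # Share a host directory into the guest at /mount-9p, inspect it, then tear it down
    mkdir -p /tmp/mount-demo && echo hello > /tmp/mount-demo/created-by-test
    out/minikube-linux-amd64 mount -p functional-087934 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1 &
    MOUNT_PID=$!
    sleep 5   # give the 9p server a moment; the test polls with findmnt instead
    out/minikube-linux-amd64 -p functional-087934 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-087934 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-087934 ssh "sudo umount -f /mount-9p"
    kill $MOUNT_PID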

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "270.393593ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "85.343021ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "296.567838ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "72.087362ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)
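
The JSON form is the scripting-friendly variant of the same listing. A sketch that extracts profile names, assuming jq is installed and that minikube's output keeps its usual valid/invalid arrays with a Name field per profile (neither detail is shown in this log):

    minikube profile list -o json | jq -r '.valid[].Name'
    minikube profile list -o json --light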

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)
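
All three update-context subtests run the same command, which rewrites the profile's kubeconfig entry (API server address and context) so kubectl keeps pointing at the current VM. Sketch, assuming the usual minikube convention that the context is named after the profile:

    minikube -p functional-087934 update-context --alsologtostderr -v=2
    kubectl config get-contexts functional-087934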

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 image save gcr.io/google-containers/addon-resizer:functional-087934 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-087934 image save gcr.io/google-containers/addon-resizer:functional-087934 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.940960347s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.94s)
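
image save exports a tagged image from the cluster's containerd store to a tarball on the host. Sketch (the /tmp output path is illustrative):

    minikube -p functional-087934 image save gcr.io/google-containers/addon-resizer:functional-087934 /tmp/addon-resizer-save.tar --alsologtostderr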

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-087934 /tmp/TestFunctionalparallelMountCmdspecific-port3714502426/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-087934 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (253.447919ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-087934 /tmp/TestFunctionalparallelMountCmdspecific-port3714502426/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-087934 ssh "sudo umount -f /mount-9p": exit status 1 (226.915757ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-087934 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-087934 /tmp/TestFunctionalparallelMountCmdspecific-port3714502426/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.93s)
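
Same mount flow as any-port, but with the 9p server pinned to port 46464; the first non-zero findmnt exit above appears to be the test polling before the mount is ready, after which the retry succeeds. Checking it by hand (same illustrative host path as before):

    minikube mount -p functional-087934 /tmp/mount-demo:/mount-9p --port 46464 --alsologtostderr -v=1 &
    minikube -p functional-087934 ssh "findmnt -T /mount-9p | grep 9p"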

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 image rm gcr.io/google-containers/addon-resizer:functional-087934 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.81s)
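
image rm removes the tag from the cluster runtime and image ls confirms it is gone. Sketch:

    minikube -p functional-087934 image rm gcr.io/google-containers/addon-resizer:functional-087934 --alsologtostderr
    minikube -p functional-087934 image ls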

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-087934 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (2.146821051s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.47s)
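
image load is the inverse of the earlier image save: it imports a tarball back into the cluster runtime. Sketch, reusing the illustrative path from the save example:

    minikube -p functional-087934 image load /tmp/addon-resizer-save.tar --alsologtostderr
    minikube -p functional-087934 image ls    # the addon-resizer tag should reappear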

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-087934 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2937128282/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-087934 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2937128282/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-087934 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2937128282/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-087934 ssh "findmnt -T" /mount1: exit status 1 (378.182257ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-087934 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-087934 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2937128282/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-087934 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2937128282/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-087934 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2937128282/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.91s)
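
VerifyCleanup starts three concurrent mounts of one host directory and then tears them all down through the mount command's kill flag, which is also the practical way to clean up stray mount processes for a profile. Sketch:

    minikube mount -p functional-087934 /tmp/mount-demo:/mount1 --alsologtostderr -v=1 &
    minikube mount -p functional-087934 /tmp/mount-demo:/mount2 --alsologtostderr -v=1 &
    minikube mount -p functional-087934 /tmp/mount-demo:/mount3 --alsologtostderr -v=1 &
    # stop every mount process belonging to this profile in one call
    minikube mount -p functional-087934 --kill=true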

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-087934
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-087934 image save --daemon gcr.io/google-containers/addon-resizer:functional-087934 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-087934 image save --daemon gcr.io/google-containers/addon-resizer:functional-087934 --alsologtostderr: (1.808804049s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-087934
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.84s)
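
image save --daemon pushes the image from the cluster runtime into the host's local Docker daemon instead of a tar file; docker image inspect then confirms it arrived. Sketch:

    docker rmi gcr.io/google-containers/addon-resizer:functional-087934 || true   # start from a clean slate
    minikube -p functional-087934 image save --daemon gcr.io/google-containers/addon-resizer:functional-087934 --alsologtostderr
    docker image inspect gcr.io/google-containers/addon-resizer:functional-087934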

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-087934
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-087934
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-087934
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestIngressAddonLegacy/StartLegacyK8sCluster (124.23s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-638975 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E1127 11:14:57.303834  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-638975 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (2m4.230130574s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (124.23s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.96s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-638975 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-638975 addons enable ingress --alsologtostderr -v=5: (10.955885988s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.96s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.59s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-638975 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.59s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (37.6s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-638975 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E1127 11:17:13.459803  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.crt: no such file or directory
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-638975 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (17.197278968s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-638975 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-638975 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [10e12a47-06b7-49dc-b741-f3503f14abca] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [10e12a47-06b7-49dc-b741-f3503f14abca] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.012903823s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-638975 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-638975 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-638975 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.169
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-638975 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-638975 addons disable ingress-dns --alsologtostderr -v=1: (2.623542097s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-638975 addons disable ingress --alsologtostderr -v=1
E1127 11:17:41.144985  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.crt: no such file or directory
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-638975 addons disable ingress --alsologtostderr -v=1: (7.54583514s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (37.60s)
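
Condensed, the legacy ingress validation is: enable the ingress and ingress-dns addons, apply an nginx Pod/Service plus Ingress (the test uses its own testdata manifests, not reproduced here), then exercise the controller with a Host-header curl from inside the VM and the DNS addon with nslookup against the cluster IP. Sketch of those steps:

    minikube -p ingress-addon-legacy-638975 addons enable ingress
    minikube -p ingress-addon-legacy-638975 addons enable ingress-dns
    minikube -p ingress-addon-legacy-638975 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    nslookup hello-john.test "$(minikube -p ingress-addon-legacy-638975 ip)"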

                                                
                                    
x
+
TestJSONOutput/start/Command (118.79s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-681138 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E1127 11:19:03.458244  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/functional-087934/client.crt: no such file or directory
E1127 11:19:03.463545  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/functional-087934/client.crt: no such file or directory
E1127 11:19:03.473825  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/functional-087934/client.crt: no such file or directory
E1127 11:19:03.494195  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/functional-087934/client.crt: no such file or directory
E1127 11:19:03.534603  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/functional-087934/client.crt: no such file or directory
E1127 11:19:03.614952  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/functional-087934/client.crt: no such file or directory
E1127 11:19:03.775389  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/functional-087934/client.crt: no such file or directory
E1127 11:19:04.096097  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/functional-087934/client.crt: no such file or directory
E1127 11:19:04.736996  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/functional-087934/client.crt: no such file or directory
E1127 11:19:06.017257  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/functional-087934/client.crt: no such file or directory
E1127 11:19:08.578753  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/functional-087934/client.crt: no such file or directory
E1127 11:19:13.699445  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/functional-087934/client.crt: no such file or directory
E1127 11:19:23.939854  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/functional-087934/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-681138 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m58.784425794s)
--- PASS: TestJSONOutput/start/Command (118.79s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-681138 --output=json --user=testUser
E1127 11:19:44.420535  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/functional-087934/client.crt: no such file or directory
--- PASS: TestJSONOutput/pause/Command (0.67s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-681138 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.11s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-681138 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-681138 --output=json --user=testUser: (7.110367791s)
--- PASS: TestJSONOutput/stop/Command (7.11s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-891332 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-891332 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (83.650071ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"677caa6b-c2ba-4d74-ade0-c0ff5d6d2061","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-891332] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"cc0f203f-fb7d-487b-9596-a8751cba3eca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17644"}}
	{"specversion":"1.0","id":"e254635f-e818-4a2d-9370-62a1d5de09a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e90f8a6b-146e-4f8d-a674-f2463902051b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17644-333834/kubeconfig"}}
	{"specversion":"1.0","id":"899d44ba-1a8b-49db-9c04-c232c171b705","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-333834/.minikube"}}
	{"specversion":"1.0","id":"1a502b30-a4d5-4d0c-9e45-87e3fedb1f05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"375f73de-b672-49a8-a250-67d4e43fbe4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9dacaef7-e0b3-4bb1-9d8c-d8f3c6a00d0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-891332" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-891332
--- PASS: TestErrorJSONOutput (0.23s)
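
With --output=json, minikube emits one CloudEvents-style object per line on stdout, and the failure above surfaces as an io.k8s.sigs.minikube.error event (DRV_UNSUPPORTED_OS, exit code 56). A sketch for picking error messages out of that stream, assuming jq is available:

    minikube start -p json-output-error-891332 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
    # expected output: The driver 'fail' is not supported on linux/amd64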

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (128.08s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-527767 --driver=kvm2  --container-runtime=containerd
E1127 11:20:25.380885  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/functional-087934/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-527767 --driver=kvm2  --container-runtime=containerd: (1m3.762787083s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-530084 --driver=kvm2  --container-runtime=containerd
E1127 11:21:47.301488  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/functional-087934/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-530084 --driver=kvm2  --container-runtime=containerd: (1m1.22930551s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-527767
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-530084
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-530084" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-530084
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-530084: (1.069291685s)
helpers_test.go:175: Cleaning up "first-527767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-527767
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-527767: (1.114255204s)
--- PASS: TestMinikubeProfile (128.08s)
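
TestMinikubeProfile stands up two independent clusters and flips the active profile between them. Sketch of the same sequence:

    minikube start -p first-527767 --driver=kvm2 --container-runtime=containerd
    minikube start -p second-530084 --driver=kvm2 --container-runtime=containerd
    minikube profile first-527767      # select the active profile
    minikube profile list -ojson       # both profiles should appear
    minikube delete -p second-530084
    minikube delete -p first-527767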

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (31.6s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-148748 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E1127 11:22:06.526359  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/ingress-addon-legacy-638975/client.crt: no such file or directory
E1127 11:22:06.531630  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/ingress-addon-legacy-638975/client.crt: no such file or directory
E1127 11:22:06.541929  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/ingress-addon-legacy-638975/client.crt: no such file or directory
E1127 11:22:06.562229  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/ingress-addon-legacy-638975/client.crt: no such file or directory
E1127 11:22:06.602530  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/ingress-addon-legacy-638975/client.crt: no such file or directory
E1127 11:22:06.682971  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/ingress-addon-legacy-638975/client.crt: no such file or directory
E1127 11:22:06.843392  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/ingress-addon-legacy-638975/client.crt: no such file or directory
E1127 11:22:07.164005  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/ingress-addon-legacy-638975/client.crt: no such file or directory
E1127 11:22:07.805007  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/ingress-addon-legacy-638975/client.crt: no such file or directory
E1127 11:22:09.085438  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/ingress-addon-legacy-638975/client.crt: no such file or directory
E1127 11:22:11.650564  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/ingress-addon-legacy-638975/client.crt: no such file or directory
E1127 11:22:13.459693  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.crt: no such file or directory
E1127 11:22:16.771122  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/ingress-addon-legacy-638975/client.crt: no such file or directory
E1127 11:22:27.011342  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/ingress-addon-legacy-638975/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-148748 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (30.602542381s)
--- PASS: TestMountStart/serial/StartWithMountFirst (31.60s)
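
mount-start provisions the VM with a host mount wired in at boot rather than via a separate minikube mount process; --no-kubernetes skips cluster bring-up, and the --mount-* flags pin ownership, the 9p port and, as I read it, the 9p msize. Sketch mirroring the flags above:

    minikube start -p mount-start-1-148748 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 \
      --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 --container-runtime=containerd
    # with no explicit mount string, the default host directory shows up at /minikube-host in the guest,
    # which is what the VerifyMountFirst step below checks
    minikube -p mount-start-1-148748 ssh -- ls /minikube-host
    minikube -p mount-start-1-148748 ssh -- mount | grep 9p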

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-148748 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-148748 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (30.24s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-169565 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E1127 11:22:47.491654  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/ingress-addon-legacy-638975/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-169565 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (29.239317562s)
--- PASS: TestMountStart/serial/StartWithMountSecond (30.24s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-169565 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-169565 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-148748 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-169565 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-169565 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.14s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-169565
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-169565: (1.142723572s)
--- PASS: TestMountStart/serial/Stop (1.14s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (27.8s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-169565
E1127 11:23:28.453738  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/ingress-addon-legacy-638975/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-169565: (26.803161622s)
--- PASS: TestMountStart/serial/RestartStopped (27.80s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.43s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-169565 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-169565 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.43s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (131.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-708020 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E1127 11:24:03.457102  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/functional-087934/client.crt: no such file or directory
E1127 11:24:31.142030  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/functional-087934/client.crt: no such file or directory
E1127 11:24:50.374662  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/ingress-addon-legacy-638975/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-708020 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m10.813398409s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (131.27s)
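
A two-node cluster comes up in a single start call via --nodes; status then reports one control plane and one worker. Sketch:

    minikube start -p multinode-708020 --memory=2200 --nodes=2 --driver=kvm2 --container-runtime=containerd
    minikube -p multinode-708020 status --alsologtostderr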

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-708020 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-708020 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-708020 -- rollout status deployment/busybox: (2.385085846s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-708020 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-708020 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-708020 -- exec busybox-5bc68d56bd-bkqcs -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-708020 -- exec busybox-5bc68d56bd-vrb76 -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-708020 -- exec busybox-5bc68d56bd-bkqcs -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-708020 -- exec busybox-5bc68d56bd-vrb76 -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-708020 -- exec busybox-5bc68d56bd-bkqcs -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-708020 -- exec busybox-5bc68d56bd-vrb76 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.29s)
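
DeployApp2Nodes schedules a two-replica busybox Deployment across the nodes and checks in-cluster DNS from each Pod. A condensed sketch using kubectl directly against the profile's context (the test goes through minikube kubectl; the pod name is a placeholder to fill in from get pods):

    kubectl --context multinode-708020 apply -f testdata/multinodes/multinode-pod-dns-test.yaml
    kubectl --context multinode-708020 rollout status deployment/busybox
    kubectl --context multinode-708020 get pods -o jsonpath='{.items[*].metadata.name}'
    kubectl --context multinode-708020 exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local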

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-708020 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-708020 -- exec busybox-5bc68d56bd-bkqcs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-708020 -- exec busybox-5bc68d56bd-bkqcs -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-708020 -- exec busybox-5bc68d56bd-vrb76 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-708020 -- exec busybox-5bc68d56bd-vrb76 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.92s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (40.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-708020 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-708020 -v 3 --alsologtostderr: (39.976351632s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (40.59s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 cp testdata/cp-test.txt multinode-708020:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 ssh -n multinode-708020 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 cp multinode-708020:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4065199401/001/cp-test_multinode-708020.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 ssh -n multinode-708020 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 cp multinode-708020:/home/docker/cp-test.txt multinode-708020-m02:/home/docker/cp-test_multinode-708020_multinode-708020-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 ssh -n multinode-708020 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 ssh -n multinode-708020-m02 "sudo cat /home/docker/cp-test_multinode-708020_multinode-708020-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 cp multinode-708020:/home/docker/cp-test.txt multinode-708020-m03:/home/docker/cp-test_multinode-708020_multinode-708020-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 ssh -n multinode-708020 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 ssh -n multinode-708020-m03 "sudo cat /home/docker/cp-test_multinode-708020_multinode-708020-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 cp testdata/cp-test.txt multinode-708020-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 ssh -n multinode-708020-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 cp multinode-708020-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4065199401/001/cp-test_multinode-708020-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 ssh -n multinode-708020-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 cp multinode-708020-m02:/home/docker/cp-test.txt multinode-708020:/home/docker/cp-test_multinode-708020-m02_multinode-708020.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 ssh -n multinode-708020-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 ssh -n multinode-708020 "sudo cat /home/docker/cp-test_multinode-708020-m02_multinode-708020.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 cp multinode-708020-m02:/home/docker/cp-test.txt multinode-708020-m03:/home/docker/cp-test_multinode-708020-m02_multinode-708020-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 ssh -n multinode-708020-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 ssh -n multinode-708020-m03 "sudo cat /home/docker/cp-test_multinode-708020-m02_multinode-708020-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 cp testdata/cp-test.txt multinode-708020-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 ssh -n multinode-708020-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 cp multinode-708020-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4065199401/001/cp-test_multinode-708020-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 ssh -n multinode-708020-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 cp multinode-708020-m03:/home/docker/cp-test.txt multinode-708020:/home/docker/cp-test_multinode-708020-m03_multinode-708020.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 ssh -n multinode-708020-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 ssh -n multinode-708020 "sudo cat /home/docker/cp-test_multinode-708020-m03_multinode-708020.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 cp multinode-708020-m03:/home/docker/cp-test.txt multinode-708020-m02:/home/docker/cp-test_multinode-708020-m03_multinode-708020-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 ssh -n multinode-708020-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 ssh -n multinode-708020-m02 "sudo cat /home/docker/cp-test_multinode-708020-m03_multinode-708020-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.99s)
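
minikube cp moves files between the host and any node (node-local paths are written as <node>:<path>), and ssh -n runs the verification on a specific node. Sketch; the final destination filename is arbitrary:

    minikube -p multinode-708020 cp testdata/cp-test.txt multinode-708020-m02:/home/docker/cp-test.txt
    minikube -p multinode-708020 ssh -n multinode-708020-m02 "sudo cat /home/docker/cp-test.txt"
    # node-to-node copies go through the same command
    minikube -p multinode-708020 cp multinode-708020-m02:/home/docker/cp-test.txt multinode-708020:/home/docker/cp-test-copy.txt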

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-708020 node stop m03: (1.250395103s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-708020 status: exit status 7 (451.136216ms)

                                                
                                                
-- stdout --
	multinode-708020
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-708020-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-708020-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-708020 status --alsologtostderr: exit status 7 (467.948478ms)

                                                
                                                
-- stdout --
	multinode-708020
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-708020-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-708020-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1127 11:26:42.359878  355822 out.go:296] Setting OutFile to fd 1 ...
	I1127 11:26:42.360030  355822 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:26:42.360044  355822 out.go:309] Setting ErrFile to fd 2...
	I1127 11:26:42.360051  355822 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:26:42.360275  355822 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-333834/.minikube/bin
	I1127 11:26:42.360452  355822 out.go:303] Setting JSON to false
	I1127 11:26:42.360493  355822 mustload.go:65] Loading cluster: multinode-708020
	I1127 11:26:42.360535  355822 notify.go:220] Checking for updates...
	I1127 11:26:42.361070  355822 config.go:182] Loaded profile config "multinode-708020": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1127 11:26:42.361093  355822 status.go:255] checking status of multinode-708020 ...
	I1127 11:26:42.361592  355822 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:26:42.361669  355822 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:26:42.389301  355822 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43593
	I1127 11:26:42.389782  355822 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:26:42.390296  355822 main.go:141] libmachine: Using API Version  1
	I1127 11:26:42.390319  355822 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:26:42.390667  355822 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:26:42.390840  355822 main.go:141] libmachine: (multinode-708020) Calling .GetState
	I1127 11:26:42.392440  355822 status.go:330] multinode-708020 host status = "Running" (err=<nil>)
	I1127 11:26:42.392456  355822 host.go:66] Checking if "multinode-708020" exists ...
	I1127 11:26:42.392747  355822 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:26:42.392793  355822 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:26:42.407577  355822 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35135
	I1127 11:26:42.407963  355822 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:26:42.408404  355822 main.go:141] libmachine: Using API Version  1
	I1127 11:26:42.408426  355822 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:26:42.408705  355822 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:26:42.408890  355822 main.go:141] libmachine: (multinode-708020) Calling .GetIP
	I1127 11:26:42.411416  355822 main.go:141] libmachine: (multinode-708020) DBG | domain multinode-708020 has defined MAC address 52:54:00:8e:2d:27 in network mk-multinode-708020
	I1127 11:26:42.411835  355822 main.go:141] libmachine: (multinode-708020) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:2d:27", ip: ""} in network mk-multinode-708020: {Iface:virbr1 ExpiryTime:2023-11-27 12:23:51 +0000 UTC Type:0 Mac:52:54:00:8e:2d:27 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-708020 Clientid:01:52:54:00:8e:2d:27}
	I1127 11:26:42.411871  355822 main.go:141] libmachine: (multinode-708020) DBG | domain multinode-708020 has defined IP address 192.168.39.50 and MAC address 52:54:00:8e:2d:27 in network mk-multinode-708020
	I1127 11:26:42.411997  355822 host.go:66] Checking if "multinode-708020" exists ...
	I1127 11:26:42.412380  355822 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:26:42.412428  355822 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:26:42.427410  355822 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37529
	I1127 11:26:42.427799  355822 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:26:42.428214  355822 main.go:141] libmachine: Using API Version  1
	I1127 11:26:42.428233  355822 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:26:42.428561  355822 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:26:42.428750  355822 main.go:141] libmachine: (multinode-708020) Calling .DriverName
	I1127 11:26:42.428913  355822 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1127 11:26:42.428943  355822 main.go:141] libmachine: (multinode-708020) Calling .GetSSHHostname
	I1127 11:26:42.431650  355822 main.go:141] libmachine: (multinode-708020) DBG | domain multinode-708020 has defined MAC address 52:54:00:8e:2d:27 in network mk-multinode-708020
	I1127 11:26:42.432075  355822 main.go:141] libmachine: (multinode-708020) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:2d:27", ip: ""} in network mk-multinode-708020: {Iface:virbr1 ExpiryTime:2023-11-27 12:23:51 +0000 UTC Type:0 Mac:52:54:00:8e:2d:27 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-708020 Clientid:01:52:54:00:8e:2d:27}
	I1127 11:26:42.432117  355822 main.go:141] libmachine: (multinode-708020) DBG | domain multinode-708020 has defined IP address 192.168.39.50 and MAC address 52:54:00:8e:2d:27 in network mk-multinode-708020
	I1127 11:26:42.432282  355822 main.go:141] libmachine: (multinode-708020) Calling .GetSSHPort
	I1127 11:26:42.432467  355822 main.go:141] libmachine: (multinode-708020) Calling .GetSSHKeyPath
	I1127 11:26:42.432630  355822 main.go:141] libmachine: (multinode-708020) Calling .GetSSHUsername
	I1127 11:26:42.432799  355822 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-333834/.minikube/machines/multinode-708020/id_rsa Username:docker}
	I1127 11:26:42.526209  355822 ssh_runner.go:195] Run: systemctl --version
	I1127 11:26:42.531845  355822 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 11:26:42.546024  355822 kubeconfig.go:92] found "multinode-708020" server: "https://192.168.39.50:8443"
	I1127 11:26:42.546129  355822 api_server.go:166] Checking apiserver status ...
	I1127 11:26:42.546183  355822 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1127 11:26:42.559151  355822 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1279/cgroup
	I1127 11:26:42.570598  355822 api_server.go:182] apiserver freezer: "11:freezer:/kubepods/burstable/pod62cafb9e4c9ea70cfb0bb236db2dc749/23a4d54a3b2f3cf1d859372ed60fe5d25f86f75314e8a834da5837d3ed375a6d"
	I1127 11:26:42.570672  355822 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod62cafb9e4c9ea70cfb0bb236db2dc749/23a4d54a3b2f3cf1d859372ed60fe5d25f86f75314e8a834da5837d3ed375a6d/freezer.state
	I1127 11:26:42.579934  355822 api_server.go:204] freezer state: "THAWED"
	I1127 11:26:42.579972  355822 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I1127 11:26:42.586036  355822 api_server.go:279] https://192.168.39.50:8443/healthz returned 200:
	ok
	I1127 11:26:42.586066  355822 status.go:421] multinode-708020 apiserver status = Running (err=<nil>)
	I1127 11:26:42.586079  355822 status.go:257] multinode-708020 status: &{Name:multinode-708020 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1127 11:26:42.586101  355822 status.go:255] checking status of multinode-708020-m02 ...
	I1127 11:26:42.586547  355822 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:26:42.586627  355822 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:26:42.602186  355822 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33573
	I1127 11:26:42.602626  355822 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:26:42.603200  355822 main.go:141] libmachine: Using API Version  1
	I1127 11:26:42.603226  355822 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:26:42.603608  355822 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:26:42.603824  355822 main.go:141] libmachine: (multinode-708020-m02) Calling .GetState
	I1127 11:26:42.605349  355822 status.go:330] multinode-708020-m02 host status = "Running" (err=<nil>)
	I1127 11:26:42.605366  355822 host.go:66] Checking if "multinode-708020-m02" exists ...
	I1127 11:26:42.605649  355822 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:26:42.605681  355822 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:26:42.621031  355822 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42969
	I1127 11:26:42.621439  355822 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:26:42.621867  355822 main.go:141] libmachine: Using API Version  1
	I1127 11:26:42.621889  355822 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:26:42.622218  355822 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:26:42.622375  355822 main.go:141] libmachine: (multinode-708020-m02) Calling .GetIP
	I1127 11:26:42.624809  355822 main.go:141] libmachine: (multinode-708020-m02) DBG | domain multinode-708020-m02 has defined MAC address 52:54:00:29:45:9f in network mk-multinode-708020
	I1127 11:26:42.625218  355822 main.go:141] libmachine: (multinode-708020-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:45:9f", ip: ""} in network mk-multinode-708020: {Iface:virbr1 ExpiryTime:2023-11-27 12:25:16 +0000 UTC Type:0 Mac:52:54:00:29:45:9f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:multinode-708020-m02 Clientid:01:52:54:00:29:45:9f}
	I1127 11:26:42.625248  355822 main.go:141] libmachine: (multinode-708020-m02) DBG | domain multinode-708020-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:29:45:9f in network mk-multinode-708020
	I1127 11:26:42.625409  355822 host.go:66] Checking if "multinode-708020-m02" exists ...
	I1127 11:26:42.625695  355822 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:26:42.625753  355822 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:26:42.640024  355822 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38793
	I1127 11:26:42.640387  355822 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:26:42.640805  355822 main.go:141] libmachine: Using API Version  1
	I1127 11:26:42.640832  355822 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:26:42.641112  355822 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:26:42.641303  355822 main.go:141] libmachine: (multinode-708020-m02) Calling .DriverName
	I1127 11:26:42.641438  355822 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1127 11:26:42.641460  355822 main.go:141] libmachine: (multinode-708020-m02) Calling .GetSSHHostname
	I1127 11:26:42.643792  355822 main.go:141] libmachine: (multinode-708020-m02) DBG | domain multinode-708020-m02 has defined MAC address 52:54:00:29:45:9f in network mk-multinode-708020
	I1127 11:26:42.644172  355822 main.go:141] libmachine: (multinode-708020-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:45:9f", ip: ""} in network mk-multinode-708020: {Iface:virbr1 ExpiryTime:2023-11-27 12:25:16 +0000 UTC Type:0 Mac:52:54:00:29:45:9f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:multinode-708020-m02 Clientid:01:52:54:00:29:45:9f}
	I1127 11:26:42.644202  355822 main.go:141] libmachine: (multinode-708020-m02) DBG | domain multinode-708020-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:29:45:9f in network mk-multinode-708020
	I1127 11:26:42.644373  355822 main.go:141] libmachine: (multinode-708020-m02) Calling .GetSSHPort
	I1127 11:26:42.644534  355822 main.go:141] libmachine: (multinode-708020-m02) Calling .GetSSHKeyPath
	I1127 11:26:42.644647  355822 main.go:141] libmachine: (multinode-708020-m02) Calling .GetSSHUsername
	I1127 11:26:42.644793  355822 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-333834/.minikube/machines/multinode-708020-m02/id_rsa Username:docker}
	I1127 11:26:42.738492  355822 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 11:26:42.751028  355822 status.go:257] multinode-708020-m02 status: &{Name:multinode-708020-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1127 11:26:42.751069  355822 status.go:255] checking status of multinode-708020-m03 ...
	I1127 11:26:42.751367  355822 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:26:42.751426  355822 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:26:42.766287  355822 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38061
	I1127 11:26:42.766742  355822 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:26:42.767218  355822 main.go:141] libmachine: Using API Version  1
	I1127 11:26:42.767247  355822 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:26:42.767687  355822 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:26:42.767924  355822 main.go:141] libmachine: (multinode-708020-m03) Calling .GetState
	I1127 11:26:42.769380  355822 status.go:330] multinode-708020-m03 host status = "Stopped" (err=<nil>)
	I1127 11:26:42.769395  355822 status.go:343] host is not running, skipping remaining checks
	I1127 11:26:42.769403  355822 status.go:257] multinode-708020-m03 status: &{Name:multinode-708020-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.17s)
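
For reference, the status check logged above combines three probes per node: the libvirt domain state via the kvm2 driver, kubelet via systemctl inside the guest, and the apiserver via its freezer cgroup plus the /healthz endpoint. A rough manual equivalent (a sketch only, reusing commands that appear elsewhere in this log; kubectl is assumed to use the profile's kubeconfig context):

    out/minikube-linux-amd64 -p multinode-708020 ssh -n multinode-708020 "sudo systemctl is-active --quiet service kubelet" && echo kubelet running
    kubectl --context multinode-708020 get --raw /healthz    # expect "ok", matching the 200 seen above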

                                                
                                    
TestMultiNode/serial/StartAfterStop (28.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 node start m03 --alsologtostderr
E1127 11:27:06.525373  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/ingress-addon-legacy-638975/client.crt: no such file or directory
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-708020 node start m03 --alsologtostderr: (27.42374036s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (28.09s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (312.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-708020
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-708020
E1127 11:27:13.459635  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.crt: no such file or directory
E1127 11:27:34.215781  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/ingress-addon-legacy-638975/client.crt: no such file or directory
E1127 11:28:36.506998  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.crt: no such file or directory
E1127 11:29:03.458717  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/functional-087934/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-708020: (3m4.258923184s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-708020 --wait=true -v=8 --alsologtostderr
E1127 11:32:06.526559  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/ingress-addon-legacy-638975/client.crt: no such file or directory
E1127 11:32:13.458933  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-708020 --wait=true -v=8 --alsologtostderr: (2m8.149655164s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-708020
--- PASS: TestMultiNode/serial/RestartKeepsNodes (312.54s)

                                                
                                    
TestMultiNode/serial/DeleteNode (1.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-708020 node delete m03: (1.237891035s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.81s)
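
The readiness probe at the end of DeleteNode uses a go-template to print each node's Ready condition. An equivalent jsonpath form (a sketch, assuming the same kubectl context) that also prints the node name can make the two-node result easier to read:

    kubectl --context multinode-708020 get nodes \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'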

                                                
                                    
TestMultiNode/serial/StopMultiNode (183.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 stop
E1127 11:34:03.459583  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/functional-087934/client.crt: no such file or directory
E1127 11:35:26.504416  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/functional-087934/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-708020 stop: (3m3.284468043s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-708020 status: exit status 7 (104.332569ms)

                                                
                                                
-- stdout --
	multinode-708020
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-708020-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-708020 status --alsologtostderr: exit status 7 (97.103293ms)

                                                
                                                
-- stdout --
	multinode-708020
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-708020-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1127 11:35:28.664784  357997 out.go:296] Setting OutFile to fd 1 ...
	I1127 11:35:28.664938  357997 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:35:28.664948  357997 out.go:309] Setting ErrFile to fd 2...
	I1127 11:35:28.664955  357997 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:35:28.665141  357997 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-333834/.minikube/bin
	I1127 11:35:28.665345  357997 out.go:303] Setting JSON to false
	I1127 11:35:28.665395  357997 mustload.go:65] Loading cluster: multinode-708020
	I1127 11:35:28.665506  357997 notify.go:220] Checking for updates...
	I1127 11:35:28.665836  357997 config.go:182] Loaded profile config "multinode-708020": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1127 11:35:28.665855  357997 status.go:255] checking status of multinode-708020 ...
	I1127 11:35:28.666249  357997 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:35:28.666317  357997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:35:28.680789  357997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40725
	I1127 11:35:28.681261  357997 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:35:28.681868  357997 main.go:141] libmachine: Using API Version  1
	I1127 11:35:28.681894  357997 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:35:28.682226  357997 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:35:28.682405  357997 main.go:141] libmachine: (multinode-708020) Calling .GetState
	I1127 11:35:28.684008  357997 status.go:330] multinode-708020 host status = "Stopped" (err=<nil>)
	I1127 11:35:28.684028  357997 status.go:343] host is not running, skipping remaining checks
	I1127 11:35:28.684036  357997 status.go:257] multinode-708020 status: &{Name:multinode-708020 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1127 11:35:28.684061  357997 status.go:255] checking status of multinode-708020-m02 ...
	I1127 11:35:28.684473  357997 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1127 11:35:28.684526  357997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:35:28.699200  357997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45997
	I1127 11:35:28.699566  357997 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:35:28.699979  357997 main.go:141] libmachine: Using API Version  1
	I1127 11:35:28.700002  357997 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:35:28.700290  357997 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:35:28.700475  357997 main.go:141] libmachine: (multinode-708020-m02) Calling .GetState
	I1127 11:35:28.702223  357997 status.go:330] multinode-708020-m02 host status = "Stopped" (err=<nil>)
	I1127 11:35:28.702239  357997 status.go:343] host is not running, skipping remaining checks
	I1127 11:35:28.702244  357997 status.go:257] multinode-708020-m02 status: &{Name:multinode-708020-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (183.49s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (94.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-708020 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-708020 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m33.453471747s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708020 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (94.01s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (67.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-708020
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-708020-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-708020-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (86.38784ms)

                                                
                                                
-- stdout --
	* [multinode-708020-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17644-333834/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-333834/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-708020-m02' is duplicated with machine name 'multinode-708020-m02' in profile 'multinode-708020'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-708020-m03 --driver=kvm2  --container-runtime=containerd
E1127 11:37:06.526864  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/ingress-addon-legacy-638975/client.crt: no such file or directory
E1127 11:37:13.458902  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-708020-m03 --driver=kvm2  --container-runtime=containerd: (1m5.960731775s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-708020
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-708020: exit status 80 (256.33939ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-708020
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-708020-m03 already exists in multinode-708020-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-708020-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (67.29s)

                                                
                                    
TestPreload (243.92s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-769546 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E1127 11:38:29.576791  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/ingress-addon-legacy-638975/client.crt: no such file or directory
E1127 11:39:03.457650  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/functional-087934/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-769546 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m27.525533011s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-769546 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-769546
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-769546: (1m31.767532651s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-769546 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
E1127 11:42:06.526176  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/ingress-addon-legacy-638975/client.crt: no such file or directory
E1127 11:42:13.459164  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-769546 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (1m2.632455354s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-769546 image list
helpers_test.go:175: Cleaning up "test-preload-769546" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-769546
--- PASS: TestPreload (243.92s)
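
TestPreload exercises image persistence across a stop/start cycle: the cluster is created with --preload=false on an older Kubernetes version, an image is pulled, the machine is stopped, and after the restart `image list` is checked, presumably to confirm the pulled busybox image is still available. A condensed sketch of the same sequence, using only commands shown above:

    out/minikube-linux-amd64 -p test-preload-769546 image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-amd64 stop -p test-preload-769546
    out/minikube-linux-amd64 start -p test-preload-769546 --memory=2200 --wait=true --driver=kvm2 --container-runtime=containerd
    out/minikube-linux-amd64 -p test-preload-769546 image list   # busybox should still appear here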

                                                
                                    
TestScheduledStopUnix (140.38s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-852004 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-852004 --memory=2048 --driver=kvm2  --container-runtime=containerd: (1m8.267501631s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-852004 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-852004 -n scheduled-stop-852004
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-852004 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-852004 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-852004 -n scheduled-stop-852004
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-852004
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-852004 --schedule 15s
E1127 11:44:03.458746  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/functional-087934/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-852004
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-852004: exit status 7 (89.446061ms)

                                                
                                                
-- stdout --
	scheduled-stop-852004
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-852004 -n scheduled-stop-852004
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-852004 -n scheduled-stop-852004: exit status 7 (87.037416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-852004" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-852004
--- PASS: TestScheduledStopUnix (140.38s)
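
The scheduled-stop flow above arms a delayed stop, cancels it, then re-arms it with a short window and waits for the host to reach Stopped (minikube status exits 7 once it does). A minimal sketch of the same sequence:

    out/minikube-linux-amd64 stop -p scheduled-stop-852004 --schedule 5m        # arm a stop 5 minutes out
    out/minikube-linux-amd64 stop -p scheduled-stop-852004 --cancel-scheduled   # disarm it
    out/minikube-linux-amd64 stop -p scheduled-stop-852004 --schedule 15s       # re-arm with a short window
    sleep 20; out/minikube-linux-amd64 status -p scheduled-stop-852004          # host: Stopped, exit status 7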

                                                
                                    
TestRunningBinaryUpgrade (179.57s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.26.0.3018432547.exe start -p running-upgrade-288314 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E1127 11:47:06.525137  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/ingress-addon-legacy-638975/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.26.0.3018432547.exe start -p running-upgrade-288314 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m26.566836262s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-288314 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-288314 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m31.08805478s)
helpers_test.go:175: Cleaning up "running-upgrade-288314" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-288314
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-288314: (1.209817614s)
--- PASS: TestRunningBinaryUpgrade (179.57s)

                                                
                                    
TestKubernetesUpgrade (189.58s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-701559 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-701559 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m34.70418559s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-701559
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-701559: (4.12822341s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-701559 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-701559 status --format={{.Host}}: exit status 7 (98.589645ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-701559 --memory=2200 --kubernetes-version=v1.28.4 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-701559 --memory=2200 --kubernetes-version=v1.28.4 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m6.970150687s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-701559 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-701559 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-701559 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (140.217662ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-701559] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17644-333834/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-333834/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.4 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-701559
	    minikube start -p kubernetes-upgrade-701559 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7015592 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.4, by running:
	    
	    minikube start -p kubernetes-upgrade-701559 --kubernetes-version=v1.28.4
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-701559 --memory=2200 --kubernetes-version=v1.28.4 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-701559 --memory=2200 --kubernetes-version=v1.28.4 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (22.031219193s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-701559" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-701559
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-701559: (1.432379785s)
--- PASS: TestKubernetesUpgrade (189.58s)
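
The upgrade path validated here is: create at v1.16.0, stop, restart at v1.28.4, then confirm that a downgrade back to v1.16.0 is refused (exit status 106, K8S_DOWNGRADE_UNSUPPORTED) and that another restart at v1.28.4 still succeeds. Condensed, using the same flags as above:

    out/minikube-linux-amd64 start -p kubernetes-upgrade-701559 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=containerd
    out/minikube-linux-amd64 stop -p kubernetes-upgrade-701559
    out/minikube-linux-amd64 start -p kubernetes-upgrade-701559 --memory=2200 --kubernetes-version=v1.28.4 --driver=kvm2 --container-runtime=containerd
    out/minikube-linux-amd64 start -p kubernetes-upgrade-701559 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=containerd   # rejected: K8S_DOWNGRADE_UNSUPPORTED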

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-276762 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-276762 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (117.964197ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-276762] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17644-333834/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-333834/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (151.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-276762 --driver=kvm2  --container-runtime=containerd
E1127 11:45:16.507761  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-276762 --driver=kvm2  --container-runtime=containerd: (2m31.056804653s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-276762 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (151.38s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (51s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-276762 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E1127 11:47:13.459824  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-276762 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (49.501193541s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-276762 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-276762 status -o json: exit status 2 (282.400478ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-276762","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-276762
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-276762: (1.211412101s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (51.00s)

                                                
                                    
TestNoKubernetes/serial/Start (30.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-276762 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-276762 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (30.10969664s)
--- PASS: TestNoKubernetes/serial/Start (30.11s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-276762 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-276762 "sudo systemctl is-active --quiet service kubelet": exit status 1 (232.581508ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)
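
The exit codes above are expected for a cluster started with --no-kubernetes: systemctl reports the kubelet unit inactive (remote exit status 3), and minikube ssh surfaces that as exit status 1. The same check, run by hand:

    out/minikube-linux-amd64 ssh -p NoKubernetes-276762 "sudo systemctl is-active --quiet service kubelet"
    echo $?    # 1 here, since the remote systemctl exited non-zero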

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.27s)

                                                
                                    
TestNoKubernetes/serial/Stop (12.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-276762
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-276762: (12.282291112s)
--- PASS: TestNoKubernetes/serial/Stop (12.28s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (36.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-276762 --driver=kvm2  --container-runtime=containerd
E1127 11:49:03.457790  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/functional-087934/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-276762 --driver=kvm2  --container-runtime=containerd: (36.263492129s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (36.26s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-276762 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-276762 "sudo systemctl is-active --quiet service kubelet": exit status 1 (256.082976ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.49s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.49s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (131.93s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.26.0.2380770350.exe start -p stopped-upgrade-158401 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.26.0.2380770350.exe start -p stopped-upgrade-158401 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m1.690257962s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.26.0.2380770350.exe -p stopped-upgrade-158401 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.26.0.2380770350.exe -p stopped-upgrade-158401 stop: (2.149091038s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-158401 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-158401 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m8.086270114s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (131.93s)

                                                
                                    
TestNetworkPlugins/group/false (4.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-468067 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-468067 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (134.706126ms)

                                                
                                                
-- stdout --
	* [false-468067] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17644-333834/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-333834/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1127 11:49:48.894111  365945 out.go:296] Setting OutFile to fd 1 ...
	I1127 11:49:48.894431  365945 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:49:48.894442  365945 out.go:309] Setting ErrFile to fd 2...
	I1127 11:49:48.894450  365945 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:49:48.894692  365945 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-333834/.minikube/bin
	I1127 11:49:48.895368  365945 out.go:303] Setting JSON to false
	I1127 11:49:48.896492  365945 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9140,"bootTime":1701076649,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 11:49:48.896568  365945 start.go:138] virtualization: kvm guest
	I1127 11:49:48.899095  365945 out.go:177] * [false-468067] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 11:49:48.901262  365945 out.go:177]   - MINIKUBE_LOCATION=17644
	I1127 11:49:48.901221  365945 notify.go:220] Checking for updates...
	I1127 11:49:48.902948  365945 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 11:49:48.904656  365945 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17644-333834/kubeconfig
	I1127 11:49:48.906350  365945 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-333834/.minikube
	I1127 11:49:48.907855  365945 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1127 11:49:48.909327  365945 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 11:49:48.911249  365945 config.go:182] Loaded profile config "cert-expiration-349610": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1127 11:49:48.911383  365945 config.go:182] Loaded profile config "kubernetes-upgrade-701559": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1127 11:49:48.911486  365945 config.go:182] Loaded profile config "stopped-upgrade-158401": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I1127 11:49:48.911593  365945 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 11:49:48.953677  365945 out.go:177] * Using the kvm2 driver based on user configuration
	I1127 11:49:48.955092  365945 start.go:298] selected driver: kvm2
	I1127 11:49:48.955118  365945 start.go:902] validating driver "kvm2" against <nil>
	I1127 11:49:48.955132  365945 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 11:49:48.957301  365945 out.go:177] 
	W1127 11:49:48.958749  365945 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1127 11:49:48.960004  365945 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-468067 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-468067

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-468067

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-468067

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-468067

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-468067

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-468067

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-468067

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-468067

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-468067

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-468067

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468067"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468067"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468067"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-468067

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468067"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468067"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-468067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-468067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-468067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-468067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-468067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-468067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-468067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-468067" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468067"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468067"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468067"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468067"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468067"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-468067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-468067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-468067" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468067"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468067"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468067"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468067"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468067"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17644-333834/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Nov 2023 11:47:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.83.236:8443
  name: cert-expiration-349610
contexts:
- context:
    cluster: cert-expiration-349610
    extensions:
    - extension:
        last-update: Mon, 27 Nov 2023 11:47:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-349610
  name: cert-expiration-349610
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-349610
  user:
    client-certificate: /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/cert-expiration-349610/client.crt
    client-key: /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/cert-expiration-349610/client.key
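[editor's note] The kubeconfig dump above explains every failed probe in this debugLogs run: current-context is empty and the only entry is cert-expiration-349610, while the false-468067 profile was intentionally never created, so kubectl reports "context was not found" and minikube reports "Profile ... not found". Purely as an illustration (not part of the test harness), the available contexts could be listed and selected with:
	kubectl config get-contexts
	kubectl config use-context cert-expiration-349610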

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-468067

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468067"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468067"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468067"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468067"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468067"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468067"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468067"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468067"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468067"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468067"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468067"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468067"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468067"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468067"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468067"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468067"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468067"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468067"

                                                
                                                
----------------------- debugLogs end: false-468067 [took: 4.627832414s] --------------------------------
helpers_test.go:175: Cleaning up "false-468067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-468067
--- PASS: TestNetworkPlugins/group/false (4.96s)
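[editor's note] The MK_USAGE exit captured in this test's stderr is the asserted outcome: the "false" network-plugin group exercises an explicitly disabled CNI, and with the containerd runtime minikube refuses to start because containerd provides no built-in pod networking. A minimal sketch of the distinction, using the same flags seen elsewhere in this run (the profile name "demo" is hypothetical):
	# rejected: the containerd runtime requires a CNI plugin
	out/minikube-linux-amd64 start -p demo --driver=kvm2 --container-runtime=containerd --cni=false
	# accepted: let minikube pick a default CNI, or name one explicitly (e.g. --cni=bridge)
	out/minikube-linux-amd64 start -p demo --driver=kvm2 --container-runtime=containerd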

                                                
                                    
x
+
TestPause/serial/Start (108.6s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-855636 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-855636 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m48.604298472s)
--- PASS: TestPause/serial/Start (108.60s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (128.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-468067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-468067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (2m8.662174551s)
--- PASS: TestNetworkPlugins/group/auto/Start (128.66s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (140.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-468067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-468067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (2m20.718043743s)
--- PASS: TestNetworkPlugins/group/flannel/Start (140.72s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.15s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-158401
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-158401: (1.146019707s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (126.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-468067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-468067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (2m6.037231534s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (126.04s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (54.11s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-855636 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E1127 11:52:06.505209  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/functional-087934/client.crt: no such file or directory
E1127 11:52:06.525510  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/ingress-addon-legacy-638975/client.crt: no such file or directory
E1127 11:52:13.459498  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-855636 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (54.085353105s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (54.11s)

                                                
                                    
x
+
TestPause/serial/Pause (1.2s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-855636 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-855636 --alsologtostderr -v=5: (1.198130036s)
--- PASS: TestPause/serial/Pause (1.20s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.35s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-855636 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-855636 --output=json --layout=cluster: exit status 2 (353.981573ms)

                                                
                                                
-- stdout --
	{"Name":"pause-855636","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-855636","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.35s)
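[editor's note] The cluster-layout JSON above encodes state as HTTP-style codes (418 = Paused, 200 = OK, 405 = Stopped), which is why the status command exits non-zero here (exit status 2) even though the pause test itself passes. Purely as an illustration of reading that output (jq is not part of the test harness):
	out/minikube-linux-amd64 status -p pause-855636 --output=json --layout=cluster | jq '.Nodes[].Components'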

                                                
                                    
x
+
TestPause/serial/Unpause (0.87s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-855636 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.87s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.12s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-855636 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-855636 --alsologtostderr -v=5: (1.122771549s)
--- PASS: TestPause/serial/PauseAgain (1.12s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.42s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-855636 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-855636 --alsologtostderr -v=5: (1.420328213s)
--- PASS: TestPause/serial/DeletePaused (1.42s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (1.78s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.774850643s)
--- PASS: TestPause/serial/VerifyDeletedResources (1.78s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (127.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-468067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-468067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (2m7.921321577s)
--- PASS: TestNetworkPlugins/group/bridge/Start (127.92s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-468067 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-468067 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-r4q8d" [5aee275d-99d9-4409-9223-eb1b091a81cb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-r4q8d" [5aee275d-99d9-4409-9223-eb1b091a81cb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.02002367s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-468067 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-468067 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-468067 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-5znfw" [99678a5b-8c6f-46a6-af0e-01373b0f695d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.028063191s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-468067 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (12.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-468067 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jkjp9" [038b9fc7-37d8-46fe-8584-218227c42c4e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jkjp9" [038b9fc7-37d8-46fe-8584-218227c42c4e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.015974544s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.56s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-468067 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-468067 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-v56cv" [43b2fa61-11a0-4978-b304-91071e8376e1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-v56cv" [43b2fa61-11a0-4978-b304-91071e8376e1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.034441471s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.74s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (126.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-468067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-468067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (2m6.495795693s)
--- PASS: TestNetworkPlugins/group/calico/Start (126.50s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-468067 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-468067 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-468067 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (26.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-468067 exec deployment/netcat -- nslookup kubernetes.default
E1127 11:54:03.457459  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/functional-087934/client.crt: no such file or directory
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-468067 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.204405195s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-468067 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context enable-default-cni-468067 exec deployment/netcat -- nslookup kubernetes.default: (10.224140949s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (26.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (115.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-468067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-468067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m55.038223426s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (115.04s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-468067 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-468067 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (132.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-468067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-468067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (2m12.079759625s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (132.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-468067 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (12.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-468067 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fxrql" [414710b5-4610-44d2-a7bd-98c2a5ae111c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fxrql" [414710b5-4610-44d2-a7bd-98c2a5ae111c] Running
E1127 11:55:09.577765  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/ingress-addon-legacy-638975/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.019613478s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.63s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-468067 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-468067 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-468067 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (153.78s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-875817 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-875817 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m33.777658478s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (153.78s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-pqvf4" [adbce5f6-f752-4122-b900-789c1aa23588] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.031677997s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-468067 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-468067 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-x6lbd" [b1f1de56-2eb5-4649-a9f8-67141faf4970] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-x6lbd" [b1f1de56-2eb5-4649-a9f8-67141faf4970] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.017056132s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.58s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-qllml" [97edc22c-e53c-4ce6-8484-2e63ac459ef7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.037055781s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-468067 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-468067 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dgvd6" [f8f6a2e5-6d04-42c0-adab-36fccb709205] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-dgvd6" [f8f6a2e5-6d04-42c0-adab-36fccb709205] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.020447621s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.56s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-468067 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-468067 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-468067 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-468067 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-468067 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-468067 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (124.77s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-504166 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-504166 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (2m4.766781191s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (124.77s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (157.9s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-855869 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-855869 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (2m37.900868165s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (157.90s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-468067 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.49s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-468067 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4mrg9" [ec70c9e4-18aa-4eb5-95b9-ed46e4e6b54e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4mrg9" [ec70c9e4-18aa-4eb5-95b9-ed46e4e6b54e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.014311526s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.49s)
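NetCatPod simply (re)applies the shared netcat deployment into the custom-flannel cluster and waits for it to report Ready. The same check done by hand might look like the following, assuming the custom-flannel-468067 context still exists (the --timeout value here is illustrative, not the harness's 15m limit):

    kubectl --context custom-flannel-468067 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context custom-flannel-468067 wait --for=condition=ready pod -l app=netcat --timeout=120s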

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-468067 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-468067 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-468067 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (107.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-533587 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-533587 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m47.155233627s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (107.16s)
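The default-k8s-diff-port profile differs from a stock start mainly in serving the API on 8444 instead of 8443 (--apiserver-port=8444). One way to confirm the non-standard port after the start, assuming the profile is still up:

    kubectl --context default-k8s-diff-port-533587 cluster-info
    # the control plane URL printed here should end in :8444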

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-875817 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0c5877c7-f196-4a43-b010-31cbe0215b52] Pending
helpers_test.go:344: "busybox" [0c5877c7-f196-4a43-b010-31cbe0215b52] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0c5877c7-f196-4a43-b010-31cbe0215b52] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.048453317s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-875817 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.61s)
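DeployApp creates a plain busybox pod from testdata/busybox.yaml, waits for it to run, and reads its open-file limit, which doubles as a smoke test that kubectl exec works against the v1.16.0 cluster. A minimal manual reproduction, assuming the old-k8s-version-875817 context exists and the pod is not already present:

    kubectl --context old-k8s-version-875817 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-875817 wait --for=condition=ready pod busybox --timeout=480s
    kubectl --context old-k8s-version-875817 exec busybox -- /bin/sh -c "ulimit -n"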

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-875817 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-875817 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.034069588s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-875817 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.17s)
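The --images/--registries overrides point the metrics-server addon at an echoserver image behind the unreachable fake.domain registry, which suggests the assertion here is that the overrides land in the Deployment spec rather than that metrics actually flow. To inspect the substituted image by hand (the jsonpath variant is an illustrative sketch, not something the harness runs):

    kubectl --context old-k8s-version-875817 -n kube-system describe deploy/metrics-server
    kubectl --context old-k8s-version-875817 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'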

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (93.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-875817 --alsologtostderr -v=3
E1127 11:58:21.233904  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/auto-468067/client.crt: no such file or directory
E1127 11:58:21.239323  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/auto-468067/client.crt: no such file or directory
E1127 11:58:21.249672  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/auto-468067/client.crt: no such file or directory
E1127 11:58:21.270054  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/auto-468067/client.crt: no such file or directory
E1127 11:58:21.310372  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/auto-468067/client.crt: no such file or directory
E1127 11:58:21.390851  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/auto-468067/client.crt: no such file or directory
E1127 11:58:21.551333  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/auto-468067/client.crt: no such file or directory
E1127 11:58:21.871832  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/auto-468067/client.crt: no such file or directory
E1127 11:58:22.512285  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/auto-468067/client.crt: no such file or directory
E1127 11:58:23.793567  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/auto-468067/client.crt: no such file or directory
E1127 11:58:26.353924  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/auto-468067/client.crt: no such file or directory
E1127 11:58:31.474756  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/auto-468067/client.crt: no such file or directory
E1127 11:58:33.816693  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/flannel-468067/client.crt: no such file or directory
E1127 11:58:33.822067  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/flannel-468067/client.crt: no such file or directory
E1127 11:58:33.832452  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/flannel-468067/client.crt: no such file or directory
E1127 11:58:33.852897  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/flannel-468067/client.crt: no such file or directory
E1127 11:58:33.893291  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/flannel-468067/client.crt: no such file or directory
E1127 11:58:33.973866  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/flannel-468067/client.crt: no such file or directory
E1127 11:58:34.134358  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/flannel-468067/client.crt: no such file or directory
E1127 11:58:34.455256  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/flannel-468067/client.crt: no such file or directory
E1127 11:58:35.096275  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/flannel-468067/client.crt: no such file or directory
E1127 11:58:36.377206  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/flannel-468067/client.crt: no such file or directory
E1127 11:58:38.938147  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/flannel-468067/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-875817 --alsologtostderr -v=3: (1m33.027775652s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (93.03s)
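The stop itself is a single command; the interleaved cert_rotation errors come from the long-running test process and reference client certificates of network-plugin profiles that appear to have been removed already, so they are noise relative to this subtest. To stop and confirm the state by hand:

    out/minikube-linux-amd64 stop -p old-k8s-version-875817 --alsologtostderr -v=3
    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-875817 -n old-k8s-version-875817
    # expect "Stopped" on stdout and a non-zero (7) exit code once the VM is down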

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.58s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-504166 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [db5f4e22-1a4a-4ef3-acb8-77e7b2a672d0] Pending
E1127 11:58:41.715789  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/auto-468067/client.crt: no such file or directory
E1127 11:58:42.016421  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/enable-default-cni-468067/client.crt: no such file or directory
E1127 11:58:42.021768  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/enable-default-cni-468067/client.crt: no such file or directory
E1127 11:58:42.032177  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/enable-default-cni-468067/client.crt: no such file or directory
E1127 11:58:42.052589  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/enable-default-cni-468067/client.crt: no such file or directory
E1127 11:58:42.093034  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/enable-default-cni-468067/client.crt: no such file or directory
E1127 11:58:42.173671  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/enable-default-cni-468067/client.crt: no such file or directory
helpers_test.go:344: "busybox" [db5f4e22-1a4a-4ef3-acb8-77e7b2a672d0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1127 11:58:42.334041  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/enable-default-cni-468067/client.crt: no such file or directory
E1127 11:58:42.654814  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/enable-default-cni-468067/client.crt: no such file or directory
E1127 11:58:43.295634  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/enable-default-cni-468067/client.crt: no such file or directory
E1127 11:58:44.058914  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/flannel-468067/client.crt: no such file or directory
helpers_test.go:344: "busybox" [db5f4e22-1a4a-4ef3-acb8-77e7b2a672d0] Running
E1127 11:58:44.576819  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/enable-default-cni-468067/client.crt: no such file or directory
E1127 11:58:47.137472  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/enable-default-cni-468067/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.026138471s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-504166 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.58s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.32s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-504166 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-504166 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.231343913s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-504166 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.32s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (102.02s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-504166 --alsologtostderr -v=3
E1127 11:58:52.258032  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/enable-default-cni-468067/client.crt: no such file or directory
E1127 11:58:54.300046  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/flannel-468067/client.crt: no such file or directory
E1127 11:59:02.196452  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/auto-468067/client.crt: no such file or directory
E1127 11:59:02.499191  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/enable-default-cni-468067/client.crt: no such file or directory
E1127 11:59:03.457908  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/functional-087934/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-504166 --alsologtostderr -v=3: (1m42.0224793s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (102.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-533587 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [00415119-a67d-4d69-a431-e3071ff2c901] Pending
helpers_test.go:344: "busybox" [00415119-a67d-4d69-a431-e3071ff2c901] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [00415119-a67d-4d69-a431-e3071ff2c901] Running
E1127 11:59:14.780642  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/flannel-468067/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.033729308s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-533587 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.45s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-533587 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-533587 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.248223195s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-533587 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-533587 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-533587 --alsologtostderr -v=3: (1m31.972550898s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.97s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.49s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-855869 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5be0b593-2d32-4f6d-8ab5-0f7e71322634] Pending
E1127 11:59:22.980103  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/enable-default-cni-468067/client.crt: no such file or directory
helpers_test.go:344: "busybox" [5be0b593-2d32-4f6d-8ab5-0f7e71322634] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5be0b593-2d32-4f6d-8ab5-0f7e71322634] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.043921451s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-855869 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.49s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.3s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-855869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-855869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.190892387s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-855869 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (92.09s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-855869 --alsologtostderr -v=3
E1127 11:59:43.156649  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/auto-468067/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-855869 --alsologtostderr -v=3: (1m32.092998071s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (92.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-875817 -n old-k8s-version-875817
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-875817 -n old-k8s-version-875817: exit status 7 (87.046152ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-875817 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)
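Here exit status 7 from "minikube status" just reflects the Stopped host state printed on stdout, so the harness tolerates it ("may be ok") and then enables the dashboard addon against the stopped profile; the addon is expected to come up on the next start. The same sequence by hand:

    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-875817 -n old-k8s-version-875817 || echo "status exited $?"
    out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-875817 --images=MetricsScraper=registry.k8s.io/echoserver:1.4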

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (444.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-875817 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
E1127 11:59:55.741189  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/flannel-468067/client.crt: no such file or directory
E1127 12:00:02.125829  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/bridge-468067/client.crt: no such file or directory
E1127 12:00:02.131265  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/bridge-468067/client.crt: no such file or directory
E1127 12:00:02.141601  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/bridge-468067/client.crt: no such file or directory
E1127 12:00:02.161986  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/bridge-468067/client.crt: no such file or directory
E1127 12:00:02.202413  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/bridge-468067/client.crt: no such file or directory
E1127 12:00:02.282840  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/bridge-468067/client.crt: no such file or directory
E1127 12:00:02.443480  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/bridge-468067/client.crt: no such file or directory
E1127 12:00:02.764377  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/bridge-468067/client.crt: no such file or directory
E1127 12:00:03.405326  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/bridge-468067/client.crt: no such file or directory
E1127 12:00:03.940614  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/enable-default-cni-468067/client.crt: no such file or directory
E1127 12:00:04.686632  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/bridge-468067/client.crt: no such file or directory
E1127 12:00:07.247228  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/bridge-468067/client.crt: no such file or directory
E1127 12:00:12.368192  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/bridge-468067/client.crt: no such file or directory
E1127 12:00:22.608424  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/bridge-468067/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-875817 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (7m23.678768699s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-875817 -n old-k8s-version-875817
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (444.01s)
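Because the profile was stopped rather than deleted, SecondStart reuses the existing VM and cluster configuration and only has to bring Kubernetes v1.16.0 back up; at roughly 7m24s it is still the slowest restart in this group. A quick post-restart check, assuming the profile is running:

    kubectl --context old-k8s-version-875817 get nodes
    kubectl --context old-k8s-version-875817 get pods -n kube-system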

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-504166 -n no-preload-504166
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-504166 -n no-preload-504166: exit status 7 (86.822896ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-504166 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (332.99s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-504166 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E1127 12:00:43.089443  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/bridge-468067/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-504166 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m32.535769596s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-504166 -n no-preload-504166
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (332.99s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-533587 -n default-k8s-diff-port-533587
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-533587 -n default-k8s-diff-port-533587: exit status 7 (97.229232ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-533587 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (364.87s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-533587 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E1127 12:00:58.146917  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/calico-468067/client.crt: no such file or directory
E1127 12:00:58.152260  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/calico-468067/client.crt: no such file or directory
E1127 12:00:58.162657  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/calico-468067/client.crt: no such file or directory
E1127 12:00:58.183032  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/calico-468067/client.crt: no such file or directory
E1127 12:00:58.223458  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/calico-468067/client.crt: no such file or directory
E1127 12:00:58.303946  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/calico-468067/client.crt: no such file or directory
E1127 12:00:58.464810  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/calico-468067/client.crt: no such file or directory
E1127 12:00:58.785480  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/calico-468067/client.crt: no such file or directory
E1127 12:00:59.425943  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/calico-468067/client.crt: no such file or directory
E1127 12:01:00.707069  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/calico-468067/client.crt: no such file or directory
E1127 12:01:03.267319  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/calico-468067/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-533587 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (6m4.456118731s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-533587 -n default-k8s-diff-port-533587
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (364.87s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-855869 -n embed-certs-855869
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-855869 -n embed-certs-855869: exit status 7 (109.752659ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-855869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (347.49s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-855869 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E1127 12:01:05.077718  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/auto-468067/client.crt: no such file or directory
E1127 12:01:06.590523  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/kindnet-468067/client.crt: no such file or directory
E1127 12:01:06.595928  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/kindnet-468067/client.crt: no such file or directory
E1127 12:01:06.606339  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/kindnet-468067/client.crt: no such file or directory
E1127 12:01:06.627008  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/kindnet-468067/client.crt: no such file or directory
E1127 12:01:06.667417  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/kindnet-468067/client.crt: no such file or directory
E1127 12:01:06.748355  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/kindnet-468067/client.crt: no such file or directory
E1127 12:01:06.908918  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/kindnet-468067/client.crt: no such file or directory
E1127 12:01:07.229596  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/kindnet-468067/client.crt: no such file or directory
E1127 12:01:07.870171  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/kindnet-468067/client.crt: no such file or directory
E1127 12:01:08.387534  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/calico-468067/client.crt: no such file or directory
E1127 12:01:09.151289  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/kindnet-468067/client.crt: no such file or directory
E1127 12:01:11.712185  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/kindnet-468067/client.crt: no such file or directory
E1127 12:01:16.833242  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/kindnet-468067/client.crt: no such file or directory
E1127 12:01:17.661393  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/flannel-468067/client.crt: no such file or directory
E1127 12:01:18.628474  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/calico-468067/client.crt: no such file or directory
E1127 12:01:24.050659  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/bridge-468067/client.crt: no such file or directory
E1127 12:01:25.861471  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/enable-default-cni-468067/client.crt: no such file or directory
E1127 12:01:27.074495  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/kindnet-468067/client.crt: no such file or directory
E1127 12:01:39.108918  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/calico-468067/client.crt: no such file or directory
E1127 12:01:47.555709  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/kindnet-468067/client.crt: no such file or directory
E1127 12:01:52.094393  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/custom-flannel-468067/client.crt: no such file or directory
E1127 12:01:52.099772  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/custom-flannel-468067/client.crt: no such file or directory
E1127 12:01:52.110092  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/custom-flannel-468067/client.crt: no such file or directory
E1127 12:01:52.130416  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/custom-flannel-468067/client.crt: no such file or directory
E1127 12:01:52.170717  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/custom-flannel-468067/client.crt: no such file or directory
E1127 12:01:52.251187  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/custom-flannel-468067/client.crt: no such file or directory
E1127 12:01:52.411877  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/custom-flannel-468067/client.crt: no such file or directory
E1127 12:01:52.732518  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/custom-flannel-468067/client.crt: no such file or directory
E1127 12:01:53.373436  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/custom-flannel-468067/client.crt: no such file or directory
E1127 12:01:54.654260  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/custom-flannel-468067/client.crt: no such file or directory
E1127 12:01:56.508072  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.crt: no such file or directory
E1127 12:01:57.215438  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/custom-flannel-468067/client.crt: no such file or directory
E1127 12:02:02.336222  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/custom-flannel-468067/client.crt: no such file or directory
E1127 12:02:06.524918  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/ingress-addon-legacy-638975/client.crt: no such file or directory
E1127 12:02:12.577006  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/custom-flannel-468067/client.crt: no such file or directory
E1127 12:02:13.459567  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.crt: no such file or directory
E1127 12:02:20.069701  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/calico-468067/client.crt: no such file or directory
E1127 12:02:28.516984  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/kindnet-468067/client.crt: no such file or directory
E1127 12:02:33.058260  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/custom-flannel-468067/client.crt: no such file or directory
E1127 12:02:45.971848  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/bridge-468067/client.crt: no such file or directory
E1127 12:03:14.019014  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/custom-flannel-468067/client.crt: no such file or directory
E1127 12:03:21.233396  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/auto-468067/client.crt: no such file or directory
E1127 12:03:33.816309  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/flannel-468067/client.crt: no such file or directory
E1127 12:03:41.990458  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/calico-468067/client.crt: no such file or directory
E1127 12:03:42.015775  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/enable-default-cni-468067/client.crt: no such file or directory
E1127 12:03:48.918620  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/auto-468067/client.crt: no such file or directory
E1127 12:03:50.437660  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/kindnet-468067/client.crt: no such file or directory
E1127 12:04:01.501659  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/flannel-468067/client.crt: no such file or directory
E1127 12:04:03.457004  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/functional-087934/client.crt: no such file or directory
E1127 12:04:09.701928  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/enable-default-cni-468067/client.crt: no such file or directory
E1127 12:04:35.939547  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/custom-flannel-468067/client.crt: no such file or directory
E1127 12:05:02.125814  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/bridge-468067/client.crt: no such file or directory
E1127 12:05:29.812763  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/bridge-468067/client.crt: no such file or directory
E1127 12:05:58.146899  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/calico-468067/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-855869 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m47.001484946s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-855869 -n embed-certs-855869
E1127 12:06:52.095148  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/custom-flannel-468067/client.crt: no such file or directory
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (347.49s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.04s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-sd9q6" [eead3cf5-e9d2-4059-9e31-de8cabfb0f7a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1127 12:06:06.590828  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/kindnet-468067/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-sd9q6" [eead3cf5-e9d2-4059-9e31-de8cabfb0f7a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.036373332s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.04s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-sd9q6" [eead3cf5-e9d2-4059-9e31-de8cabfb0f7a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.024450786s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-504166 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)
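UserAppExistsAfterStop and AddonExistsAfterStop together confirm that the dashboard addon enabled while the cluster was down survived the restart: first the kubernetes-dashboard pod must reach Running, then the dashboard-metrics-scraper deployment must still be describable. A hand-rolled version of the same checks, assuming the no-preload-504166 context is still available:

    kubectl --context no-preload-504166 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
    kubectl --context no-preload-504166 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper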

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-504166 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)
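VerifyKubernetesImages lists the images present in containerd on the node and reports anything that is not a stock minikube/Kubernetes image (here only the busybox test image). To browse the same list interactively, assuming the profile is running (the jq filter is an illustrative addition, not something the harness runs):

    out/minikube-linux-amd64 ssh -p no-preload-504166 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'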

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.49s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-504166 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-504166 --alsologtostderr -v=1: (1.022639544s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-504166 -n no-preload-504166
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-504166 -n no-preload-504166: exit status 2 (321.084122ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-504166 -n no-preload-504166
E1127 12:06:25.831255  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/calico-468067/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-504166 -n no-preload-504166: exit status 2 (326.168995ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-504166 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-504166 -n no-preload-504166
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-504166 -n no-preload-504166
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.49s)
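Pause suspends the cluster's components and then unpauses them, using the status output to assert the intermediate state: while paused, --format={{.APIServer}} prints Paused and --format={{.Kubelet}} prints Stopped, each with exit status 2, which the harness tolerates. The same round trip by hand, assuming the profile is still up:

    out/minikube-linux-amd64 pause -p no-preload-504166 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-504166 -n no-preload-504166   # expect Paused
    out/minikube-linux-amd64 unpause -p no-preload-504166 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-504166 -n no-preload-504166   # expect Running again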

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (86.1s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-432858 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E1127 12:06:34.278764  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/kindnet-468067/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-432858 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m26.102460732s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (86.10s)
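newest-cni starts a cluster with CNI networking selected explicitly, a ServerSideApply feature gate, and a custom pod CIDR handed to kubeadm (--extra-config=kubeadm.pod-network-cidr=10.42.0.0/16), waiting only for the apiserver, system pods, and default service account. One way to confirm the CIDR override took effect, assuming the newest-cni-432858 profile is running (the jsonpath is an illustrative sketch):

    kubectl --context newest-cni-432858 get nodes -o jsonpath='{.items[0].spec.podCIDR}'
    # should print a subnet carved out of 10.42.0.0/16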

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (21.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-4vh7h" [7e0cd44d-7256-4936-97a8-3f90cee00640] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-4vh7h" [7e0cd44d-7256-4936-97a8-3f90cee00640] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 21.026251198s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (21.03s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (16.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8fxjx" [dcb93bc9-9fd7-45ca-9531-2c733e78d825] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1127 12:07:06.524947  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/ingress-addon-legacy-638975/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8fxjx" [dcb93bc9-9fd7-45ca-9531-2c733e78d825] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.022141358s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (16.02s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8fxjx" [dcb93bc9-9fd7-45ca-9531-2c733e78d825] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.021288254s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-533587 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-4vh7h" [7e0cd44d-7256-4936-97a8-3f90cee00640] Running
E1127 12:07:13.459164  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/addons-824928/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014190893s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-855869 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-lrsbt" [4439e62b-3b52-4cb3-a953-aff7d3d67799] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.03611171s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.04s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-533587 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)
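The image check above lists images via crictl inside the node and reports anything outside the expected minikube set. A minimal sketch of the same inspection, assuming `jq` is available on the host (the test itself parses the JSON in Go, not with jq):

# Sketch only: list image tags known to containerd inside the minikube node.
PROFILE=default-k8s-diff-port-533587

minikube ssh -p "$PROFILE" "sudo crictl images -o json" \
  | jq -r '.images[].repoTags[]' \
  | sort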

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.41s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-533587 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-533587 -n default-k8s-diff-port-533587
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-533587 -n default-k8s-diff-port-533587: exit status 2 (304.800671ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-533587 -n default-k8s-diff-port-533587
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-533587 -n default-k8s-diff-port-533587: exit status 2 (341.630403ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-533587 --alsologtostderr -v=1
E1127 12:07:19.781966  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/custom-flannel-468067/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-533587 -n default-k8s-diff-port-533587
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-533587 -n default-k8s-diff-port-533587
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.41s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-855869 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.7s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-855869 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-855869 -n embed-certs-855869
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-855869 -n embed-certs-855869: exit status 2 (309.349728ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-855869 -n embed-certs-855869
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-855869 -n embed-certs-855869: exit status 2 (333.720001ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-855869 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-855869 --alsologtostderr -v=1: (1.073719937s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-855869 -n embed-certs-855869
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-855869 -n embed-certs-855869
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.70s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-lrsbt" [4439e62b-3b52-4cb3-a953-aff7d3d67799] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015223283s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-875817 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-875817 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-875817 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-875817 -n old-k8s-version-875817
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-875817 -n old-k8s-version-875817: exit status 2 (284.800781ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-875817 -n old-k8s-version-875817
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-875817 -n old-k8s-version-875817: exit status 2 (284.316552ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-875817 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-875817 -n old-k8s-version-875817
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-875817 -n old-k8s-version-875817
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.86s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.54s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-432858 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-432858 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.543513047s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.54s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (2.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-432858 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-432858 --alsologtostderr -v=3: (2.118557089s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.12s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-432858 -n newest-cni-432858
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-432858 -n newest-cni-432858: exit status 7 (88.11739ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-432858 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)
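The step above verifies that an addon can be enabled while the profile is stopped: status --format={{.Host}} prints Stopped with a non-zero exit (tolerated by the test), and the enable command still succeeds. A minimal sketch of the same check, assuming `minikube` on PATH:

# Sketch only: enable the dashboard addon against a stopped profile.
PROFILE=newest-cni-432858

minikube status --format='{{.Host}}' -p "$PROFILE" -n "$PROFILE" || echo "host status exit code: $?"
minikube addons enable dashboard -p "$PROFILE" --images=MetricsScraper=registry.k8s.io/echoserver:1.4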

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (50.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-432858 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E1127 12:08:09.672229  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/old-k8s-version-875817/client.crt: no such file or directory
E1127 12:08:09.677640  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/old-k8s-version-875817/client.crt: no such file or directory
E1127 12:08:09.688178  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/old-k8s-version-875817/client.crt: no such file or directory
E1127 12:08:09.708521  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/old-k8s-version-875817/client.crt: no such file or directory
E1127 12:08:09.748976  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/old-k8s-version-875817/client.crt: no such file or directory
E1127 12:08:09.829363  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/old-k8s-version-875817/client.crt: no such file or directory
E1127 12:08:09.989844  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/old-k8s-version-875817/client.crt: no such file or directory
E1127 12:08:10.310720  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/old-k8s-version-875817/client.crt: no such file or directory
E1127 12:08:10.951455  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/old-k8s-version-875817/client.crt: no such file or directory
E1127 12:08:12.232204  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/old-k8s-version-875817/client.crt: no such file or directory
E1127 12:08:14.793145  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/old-k8s-version-875817/client.crt: no such file or directory
E1127 12:08:19.913969  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/old-k8s-version-875817/client.crt: no such file or directory
E1127 12:08:21.234260  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/auto-468067/client.crt: no such file or directory
E1127 12:08:30.154662  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/old-k8s-version-875817/client.crt: no such file or directory
E1127 12:08:33.816674  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/flannel-468067/client.crt: no such file or directory
E1127 12:08:41.154513  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/no-preload-504166/client.crt: no such file or directory
E1127 12:08:41.159852  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/no-preload-504166/client.crt: no such file or directory
E1127 12:08:41.170253  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/no-preload-504166/client.crt: no such file or directory
E1127 12:08:41.190696  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/no-preload-504166/client.crt: no such file or directory
E1127 12:08:41.231073  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/no-preload-504166/client.crt: no such file or directory
E1127 12:08:41.311575  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/no-preload-504166/client.crt: no such file or directory
E1127 12:08:41.472102  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/no-preload-504166/client.crt: no such file or directory
E1127 12:08:41.792739  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/no-preload-504166/client.crt: no such file or directory
E1127 12:08:42.016355  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/enable-default-cni-468067/client.crt: no such file or directory
E1127 12:08:42.433602  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/no-preload-504166/client.crt: no such file or directory
E1127 12:08:43.713904  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/no-preload-504166/client.crt: no such file or directory
E1127 12:08:46.274968  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/no-preload-504166/client.crt: no such file or directory
E1127 12:08:46.506390  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/functional-087934/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-432858 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (49.851035999s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-432858 -n newest-cni-432858
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (50.15s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
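The two warnings above ("cni mode requires additional setup before pods can schedule") reflect that this profile is started with --network-plugin=cni but no CNI plugin is deployed by the test, so user pods would stay Pending. A minimal sketch of the extra step that would be needed outside the test; the manifest path is a placeholder assumption, not something taken from this run:

# Sketch only: after starting newest-cni-432858 with --network-plugin=cni,
# a CNI plugin must be applied before pods can schedule.
kubectl --context newest-cni-432858 apply -f <your-cni-manifest>.yaml

# Nodes become Ready once the CNI is up.
kubectl --context newest-cni-432858 get nodes
kubectl --context newest-cni-432858 -n kube-system get pods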

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-432858 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.78s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-432858 --alsologtostderr -v=1
E1127 12:08:50.634912  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/old-k8s-version-875817/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-432858 -n newest-cni-432858
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-432858 -n newest-cni-432858: exit status 2 (281.069397ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-432858 -n newest-cni-432858
E1127 12:08:51.396030  341079 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/no-preload-504166/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-432858 -n newest-cni-432858: exit status 2 (286.465676ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-432858 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-432858 -n newest-cni-432858
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-432858 -n newest-cni-432858
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.78s)

                                                
                                    

Test skip (36/306)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.4/cached-images 0
13 TestDownloadOnly/v1.28.4/binaries 0
14 TestDownloadOnly/v1.28.4/kubectl 0
18 TestDownloadOnlyKic 0
32 TestAddons/parallel/Olm 0
44 TestDockerFlags 0
47 TestDockerEnvContainerd 0
49 TestHyperKitDriverInstallOrUpdate 0
50 TestHyperkitDriverSkipUpgrade 0
101 TestFunctional/parallel/DockerEnv 0
102 TestFunctional/parallel/PodmanEnv 0
110 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
111 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
112 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
113 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
114 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
115 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
116 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
117 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
150 TestGvisorAddon 0
151 TestImageBuild 0
184 TestKicCustomNetwork 0
185 TestKicExistingNetwork 0
186 TestKicCustomSubnet 0
187 TestKicStaticIP 0
218 TestChangeNoneUser 0
221 TestScheduledStopWindows 0
223 TestSkaffold 0
225 TestInsufficientStorage 0
229 TestMissingContainerUpgrade 0
244 TestNetworkPlugins/group/kubenet 3.85
252 TestNetworkPlugins/group/cilium 7.04
258 TestStartStop/group/disable-driver-mounts 0.31
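Each entry above can be re-run individually by test name; a minimal sketch, assuming a minikube source checkout and the standard Go test runner (the exact flags used by this CI job are not shown in the report):

# Sketch only: re-run a single test from the skip/fail lists by name.
go test ./test/integration -run 'TestStartStop/group/newest-cni/serial/FirstStart' -v -timeout 60m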
x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-468067 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-468067

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-468067

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-468067

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-468067

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-468067

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-468067

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-468067

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-468067

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-468067

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-468067

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468067"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468067"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468067"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-468067

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468067"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468067"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-468067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-468067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-468067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-468067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-468067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-468067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-468067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-468067" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468067"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468067"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468067"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468067"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468067"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-468067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-468067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-468067" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468067"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468067"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468067"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468067"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468067"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17644-333834/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Nov 2023 11:47:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.83.236:8443
  name: cert-expiration-349610
contexts:
- context:
    cluster: cert-expiration-349610
    extensions:
    - extension:
        last-update: Mon, 27 Nov 2023 11:47:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-349610
  name: cert-expiration-349610
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-349610
  user:
    client-certificate: /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/cert-expiration-349610/client.crt
    client-key: /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/cert-expiration-349610/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-468067

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468067"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468067"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468067"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468067"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468067"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468067"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468067"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468067"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468067"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468067"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468067"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468067"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468067"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468067"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468067"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468067"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468067"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468067"

                                                
                                                
----------------------- debugLogs end: kubenet-468067 [took: 3.674993483s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-468067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-468067
--- SKIP: TestNetworkPlugins/group/kubenet (3.85s)

                                                
                                    
TestNetworkPlugins/group/cilium (7.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-468067 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-468067

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-468067

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-468067

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-468067

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-468067

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-468067

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-468067

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-468067

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-468067

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-468067

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468067"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468067"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468067"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-468067

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468067"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468067"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-468067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-468067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-468067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-468067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-468067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-468067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-468067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-468067" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468067"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468067"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468067"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468067"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468067"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-468067

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-468067

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-468067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-468067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-468067

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-468067

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-468067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-468067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-468067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-468067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-468067" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468067"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468067"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468067"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468067"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468067"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17644-333834/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Nov 2023 11:47:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.83.236:8443
  name: cert-expiration-349610
contexts:
- context:
    cluster: cert-expiration-349610
    extensions:
    - extension:
        last-update: Mon, 27 Nov 2023 11:47:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-349610
  name: cert-expiration-349610
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-349610
  user:
    client-certificate: /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/cert-expiration-349610/client.crt
    client-key: /home/jenkins/minikube-integration/17644-333834/.minikube/profiles/cert-expiration-349610/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-468067

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468067"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468067"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468067"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468067"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468067"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468067"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468067"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468067"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468067"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468067"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468067"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468067"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468067"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468067"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468067"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468067"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468067"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-468067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468067"

                                                
                                                
----------------------- debugLogs end: cilium-468067 [took: 6.843448171s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-468067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-468067
--- SKIP: TestNetworkPlugins/group/cilium (7.04s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-714413" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-714413
--- SKIP: TestStartStop/group/disable-driver-mounts (0.31s)

                                                
                                    