Test Report: KVM_Linux_containerd 16899

f8194aff3a7b98ea29a2e4b2da65132feb1e4119:2023-07-17:30190

Failed tests (2/303)

Order  Failed test                          Duration
27     TestAddons/parallel/MetricsServer    8.36s
102    TestFunctional/parallel/License      0.18s
TestAddons/parallel/MetricsServer (8.36s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 4.062586ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-844d8db974-22fzn" [ea3c4280-fe08-4041-826d-ed2440bd17d9] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.010652231s
addons_test.go:391: (dbg) Run:  kubectl --context addons-061866 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-061866 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:408: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-061866 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (464.30526ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0717 21:40:55.735327   15162 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:40:55.735464   15162 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:40:55.735474   15162 out.go:309] Setting ErrFile to fd 2...
	I0717 21:40:55.735478   15162 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:40:55.735661   15162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-6542/.minikube/bin
	I0717 21:40:55.735892   15162 addons.go:594] checking whether the cluster is paused
	I0717 21:40:55.736176   15162 config.go:182] Loaded profile config "addons-061866": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0717 21:40:55.736192   15162 host.go:66] Checking if "addons-061866" exists ...
	I0717 21:40:55.736528   15162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:40:55.736568   15162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:40:55.751436   15162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34863
	I0717 21:40:55.751832   15162 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:40:55.752456   15162 main.go:141] libmachine: Using API Version  1
	I0717 21:40:55.752501   15162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:40:55.752811   15162 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:40:55.753032   15162 main.go:141] libmachine: (addons-061866) Calling .GetState
	I0717 21:40:55.754595   15162 main.go:141] libmachine: (addons-061866) Calling .DriverName
	I0717 21:40:55.754843   15162 ssh_runner.go:195] Run: systemctl --version
	I0717 21:40:55.754865   15162 main.go:141] libmachine: (addons-061866) Calling .GetSSHHostname
	I0717 21:40:55.757252   15162 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:40:55.757728   15162 main.go:141] libmachine: (addons-061866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:3c:50", ip: ""} in network mk-addons-061866: {Iface:virbr1 ExpiryTime:2023-07-17 22:38:27 +0000 UTC Type:0 Mac:52:54:00:a9:3c:50 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-061866 Clientid:01:52:54:00:a9:3c:50}
	I0717 21:40:55.757765   15162 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined IP address 192.168.39.55 and MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:40:55.757925   15162 main.go:141] libmachine: (addons-061866) Calling .GetSSHPort
	I0717 21:40:55.758113   15162 main.go:141] libmachine: (addons-061866) Calling .GetSSHKeyPath
	I0717 21:40:55.758260   15162 main.go:141] libmachine: (addons-061866) Calling .GetSSHUsername
	I0717 21:40:55.758400   15162 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-6542/.minikube/machines/addons-061866/id_rsa Username:docker}
	I0717 21:40:55.943818   15162 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0717 21:40:55.943892   15162 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 21:40:56.061299   15162 cri.go:89] found id: "3715ab16d3d15db3c07b9df1a8fe9234c20ace28442871db1e3988a11494a21f"
	I0717 21:40:56.061329   15162 cri.go:89] found id: "8db02afe3d545f49d879867717a8d205cbbcd9b9aec7341bde9398364d97661a"
	I0717 21:40:56.061336   15162 cri.go:89] found id: "f95e0e3a3ae2c744ff8bf009a4ebd2ba72ed7399f33b1830e95e1d601bfc6471"
	I0717 21:40:56.061341   15162 cri.go:89] found id: "ba4ab0c92ecd0c057cf054e064c1593856d216a81c7c968706ee9af0010a6ebc"
	I0717 21:40:56.061346   15162 cri.go:89] found id: "9baaca47bd3f1c1791d03b56ca0b427a892f54b4d0f782cb18a2d8d6035b3128"
	I0717 21:40:56.061353   15162 cri.go:89] found id: "d49828f6415a1e89cd825de8f5fdf6e1f554c5dd5c74331681612a75579774d8"
	I0717 21:40:56.061357   15162 cri.go:89] found id: "079cd56ac82fd6a22c27046fc31a2bf72146a3a7a78a559e52cbabd8e94e0cd5"
	I0717 21:40:56.061365   15162 cri.go:89] found id: "f58076b00befa335d7ee11dece6491195ef94c60b51ccfe5ee1459861c4ccf3f"
	I0717 21:40:56.061370   15162 cri.go:89] found id: "1ef7b3cb180c37bf23c9247494f3f0e01be2e9d686e0a343cb61c83a44784c3d"
	I0717 21:40:56.061379   15162 cri.go:89] found id: "61de3533a1b1629adf172a4b28990cc792781e90e936ce3f18366779c8fb621f"
	I0717 21:40:56.061399   15162 cri.go:89] found id: "e684f1e85e82d34e91846df0f78dce5a6fd19ad5930595b58011269c6a3d2285"
	I0717 21:40:56.061412   15162 cri.go:89] found id: "74dedc42bfc91d32f4deee239ca6d44facd3bab15ff131463120bd948415b6c4"
	I0717 21:40:56.061418   15162 cri.go:89] found id: "596f19fc00278eb585eaf69656cff34bb679b15cfbb24b029c9a5e201aa90283"
	I0717 21:40:56.061423   15162 cri.go:89] found id: "1f08d9322f44715af00c455f97f0394ad27032d397907dbf76646d2af20ac457"
	I0717 21:40:56.061433   15162 cri.go:89] found id: "f4d37308b6c0df724216a8e3ba147cbb15d10c615d552f4e6357f13ed7ba24e7"
	I0717 21:40:56.061439   15162 cri.go:89] found id: "6295a6b255349518dbec8cf6f5291a746357c871aa0f5689a7bf66e5b244b6ed"
	I0717 21:40:56.061445   15162 cri.go:89] found id: "9b55249ffe6ce15b38ca859e988eff83ef2f8a082546a4e63df70dc3c63fc45f"
	I0717 21:40:56.061452   15162 cri.go:89] found id: "d4060b2bcf0480e16f74e1a05450c31935d145b754d799b39988d9c766ad7962"
	I0717 21:40:56.061459   15162 cri.go:89] found id: "a2dd7da36eb51d21dc2aea8d4ddd870a6930ce8a2168392f5cf19ab7843f8b97"
	I0717 21:40:56.061464   15162 cri.go:89] found id: "38cd1c7e4c690e225d0d7c9edbd7d0c643ab2da29455e750f9ff0cbe84737b11"
	I0717 21:40:56.061471   15162 cri.go:89] found id: "12c61943f7013b1064e2b4fa58d23ebd2ae733b2949d65838211e806ce8e4bff"
	I0717 21:40:56.061477   15162 cri.go:89] found id: "8dd823dc58ee1ae2b6cab8a7f0c993c164e3d12837e154899650785caebc1b26"
	I0717 21:40:56.061482   15162 cri.go:89] found id: "a78fef0dbf515a5dabf93afa264da2ac9ecc875f1ab8b53bd2465c22917faf10"
	I0717 21:40:56.061488   15162 cri.go:89] found id: ""
	I0717 21:40:56.061540   15162 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0717 21:40:56.152273   15162 main.go:141] libmachine: Making call to close driver server
	I0717 21:40:56.152300   15162 main.go:141] libmachine: (addons-061866) Calling .Close
	I0717 21:40:56.152607   15162 main.go:141] libmachine: (addons-061866) DBG | Closing plugin on server side
	I0717 21:40:56.152703   15162 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:40:56.152736   15162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:40:56.155470   15162 out.go:177] 
	W0717 21:40:56.156860   15162 out.go:239] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T21:40:56Z" level=error msg="stat /run/containerd/runc/k8s.io/f58076b00befa335d7ee11dece6491195ef94c60b51ccfe5ee1459861c4ccf3f: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T21:40:56Z" level=error msg="stat /run/containerd/runc/k8s.io/f58076b00befa335d7ee11dece6491195ef94c60b51ccfe5ee1459861c4ccf3f: no such file or directory"
	
	W0717 21:40:56.156876   15162 out.go:239] * 
	* 
	W0717 21:40:56.158380   15162 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 21:40:56.160049   15162 out.go:177] 

** /stderr **
addons_test.go:410: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-061866 addons disable metrics-server --alsologtostderr -v=1": exit status 11
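Reading the stderr above, the failure looks like a time-of-check/time-of-use race: the pause check first enumerates kube-system container IDs via crictl, then runs `sudo runc --root /run/containerd/runc/k8s.io list -f json`, but container `f58076b00bef...` exited between the two steps, so runc's stat of its state directory hit "no such file or directory" and the command exited 1. A tolerant version of such a scan would skip entries that vanish mid-scan instead of failing outright. A minimal Python sketch of that pattern (hypothetical illustration only, not minikube's actual Go code):

```python
import errno
import os
import tempfile

def list_live_states(root, container_ids):
    """Stat each container's state directory under root, silently
    skipping entries that disappeared between listing and stat
    (the race visible in the log above)."""
    live = []
    for cid in container_ids:
        path = os.path.join(root, cid)
        try:
            os.stat(path)
        except OSError as e:
            if e.errno == errno.ENOENT:
                continue  # container exited mid-scan: not an error
            raise
        live.append(cid)
    return live

# Demo: one of three "containers" exits before the stat pass.
root = tempfile.mkdtemp()
for cid in ("aaa", "bbb", "ccc"):
    os.mkdir(os.path.join(root, cid))
os.rmdir(os.path.join(root, "bbb"))  # simulates the exited container
print(list_live_states(root, ["aaa", "bbb", "ccc"]))  # ['aaa', 'ccc']
```

Under this reading the test is flaky rather than a metrics-server regression: any container churn in kube-system during the ~200ms between the crictl listing and the runc call can trigger the same exit status 11.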
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-061866 -n addons-061866
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-061866 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-061866 logs -n 25: (1.833567177s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-057626 | jenkins | v1.31.0 | 17 Jul 23 21:37 UTC |                     |
	|         | -p download-only-057626        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-057626 | jenkins | v1.31.0 | 17 Jul 23 21:38 UTC |                     |
	|         | -p download-only-057626        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.31.0 | 17 Jul 23 21:38 UTC | 17 Jul 23 21:38 UTC |
	| delete  | -p download-only-057626        | download-only-057626 | jenkins | v1.31.0 | 17 Jul 23 21:38 UTC | 17 Jul 23 21:38 UTC |
	| delete  | -p download-only-057626        | download-only-057626 | jenkins | v1.31.0 | 17 Jul 23 21:38 UTC | 17 Jul 23 21:38 UTC |
	| start   | --download-only -p             | binary-mirror-260672 | jenkins | v1.31.0 | 17 Jul 23 21:38 UTC |                     |
	|         | binary-mirror-260672           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:43749         |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-260672        | binary-mirror-260672 | jenkins | v1.31.0 | 17 Jul 23 21:38 UTC | 17 Jul 23 21:38 UTC |
	| start   | -p addons-061866               | addons-061866        | jenkins | v1.31.0 | 17 Jul 23 21:38 UTC | 17 Jul 23 21:40 UTC |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	|         | --addons=helm-tiller           |                      |         |         |                     |                     |
	| addons  | enable headlamp                | addons-061866        | jenkins | v1.31.0 | 17 Jul 23 21:40 UTC | 17 Jul 23 21:40 UTC |
	|         | -p addons-061866               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-061866        | jenkins | v1.31.0 | 17 Jul 23 21:40 UTC | 17 Jul 23 21:40 UTC |
	|         | addons-061866                  |                      |         |         |                     |                     |
	| ip      | addons-061866 ip               | addons-061866        | jenkins | v1.31.0 | 17 Jul 23 21:40 UTC | 17 Jul 23 21:40 UTC |
	| addons  | addons-061866 addons disable   | addons-061866        | jenkins | v1.31.0 | 17 Jul 23 21:40 UTC | 17 Jul 23 21:40 UTC |
	|         | registry --alsologtostderr     |                      |         |         |                     |                     |
	|         | -v=1                           |                      |         |         |                     |                     |
	| addons  | addons-061866 addons           | addons-061866        | jenkins | v1.31.0 | 17 Jul 23 21:40 UTC |                     |
	|         | disable metrics-server         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 21:38:11
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 21:38:11.169865   14127 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:38:11.170016   14127 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:38:11.170025   14127 out.go:309] Setting ErrFile to fd 2...
	I0717 21:38:11.170030   14127 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:38:11.170248   14127 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-6542/.minikube/bin
	I0717 21:38:11.170881   14127 out.go:303] Setting JSON to false
	I0717 21:38:11.171658   14127 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1243,"bootTime":1689628648,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 21:38:11.171715   14127 start.go:138] virtualization: kvm guest
	I0717 21:38:11.174067   14127 out.go:177] * [addons-061866] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 21:38:11.175616   14127 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 21:38:11.177209   14127 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 21:38:11.175634   14127 notify.go:220] Checking for updates...
	I0717 21:38:11.180259   14127 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-6542/kubeconfig
	I0717 21:38:11.182138   14127 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-6542/.minikube
	I0717 21:38:11.183734   14127 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 21:38:11.185205   14127 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 21:38:11.186890   14127 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 21:38:11.219586   14127 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 21:38:11.221068   14127 start.go:298] selected driver: kvm2
	I0717 21:38:11.221080   14127 start.go:880] validating driver "kvm2" against <nil>
	I0717 21:38:11.221091   14127 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 21:38:11.221894   14127 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:38:11.221983   14127 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16899-6542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 21:38:11.236280   14127 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.31.0
	I0717 21:38:11.236335   14127 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 21:38:11.236595   14127 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 21:38:11.236633   14127 cni.go:84] Creating CNI manager for ""
	I0717 21:38:11.236650   14127 cni.go:152] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0717 21:38:11.236665   14127 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 21:38:11.236673   14127 start_flags.go:319] config:
	{Name:addons-061866 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-061866 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugi
n:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:38:11.236850   14127 iso.go:125] acquiring lock: {Name:mk2c3e3c0e4d92ba8dafc265e87aade8da278690 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:38:11.239610   14127 out.go:177] * Starting control plane node addons-061866 in cluster addons-061866
	I0717 21:38:11.241116   14127 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0717 21:38:11.241151   14127 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16899-6542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4
	I0717 21:38:11.241168   14127 cache.go:57] Caching tarball of preloaded images
	I0717 21:38:11.241256   14127 preload.go:174] Found /home/jenkins/minikube-integration/16899-6542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 21:38:11.241271   14127 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on containerd
	I0717 21:38:11.241545   14127 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/config.json ...
	I0717 21:38:11.241567   14127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/config.json: {Name:mk4268c9c65309c384d9681e1bb1667e2193260f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:38:11.241739   14127 start.go:365] acquiring machines lock for addons-061866: {Name:mkc8705f1b50057ef70658c3e47a2f210f86e2bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 21:38:11.241803   14127 start.go:369] acquired machines lock for "addons-061866" in 40.323µs
	I0717 21:38:11.241826   14127 start.go:93] Provisioning new machine with config: &{Name:addons-061866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-061866
Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUI
D:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0717 21:38:11.241895   14127 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 21:38:11.243591   14127 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0717 21:38:11.243725   14127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:38:11.243780   14127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:38:11.257308   14127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35605
	I0717 21:38:11.257717   14127 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:38:11.258274   14127 main.go:141] libmachine: Using API Version  1
	I0717 21:38:11.258301   14127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:38:11.258632   14127 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:38:11.258801   14127 main.go:141] libmachine: (addons-061866) Calling .GetMachineName
	I0717 21:38:11.258952   14127 main.go:141] libmachine: (addons-061866) Calling .DriverName
	I0717 21:38:11.259098   14127 start.go:159] libmachine.API.Create for "addons-061866" (driver="kvm2")
	I0717 21:38:11.259124   14127 client.go:168] LocalClient.Create starting
	I0717 21:38:11.259158   14127 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/16899-6542/.minikube/certs/ca.pem
	I0717 21:38:11.373585   14127 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/16899-6542/.minikube/certs/cert.pem
	I0717 21:38:11.470050   14127 main.go:141] libmachine: Running pre-create checks...
	I0717 21:38:11.470074   14127 main.go:141] libmachine: (addons-061866) Calling .PreCreateCheck
	I0717 21:38:11.470531   14127 main.go:141] libmachine: (addons-061866) Calling .GetConfigRaw
	I0717 21:38:11.470943   14127 main.go:141] libmachine: Creating machine...
	I0717 21:38:11.470957   14127 main.go:141] libmachine: (addons-061866) Calling .Create
	I0717 21:38:11.471078   14127 main.go:141] libmachine: (addons-061866) Creating KVM machine...
	I0717 21:38:11.472139   14127 main.go:141] libmachine: (addons-061866) DBG | found existing default KVM network
	I0717 21:38:11.472803   14127 main.go:141] libmachine: (addons-061866) DBG | I0717 21:38:11.472672   14149 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000298b0}
	I0717 21:38:11.478487   14127 main.go:141] libmachine: (addons-061866) DBG | trying to create private KVM network mk-addons-061866 192.168.39.0/24...
	I0717 21:38:11.543496   14127 main.go:141] libmachine: (addons-061866) DBG | private KVM network mk-addons-061866 192.168.39.0/24 created
	I0717 21:38:11.543530   14127 main.go:141] libmachine: (addons-061866) DBG | I0717 21:38:11.543439   14149 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/16899-6542/.minikube
	I0717 21:38:11.543546   14127 main.go:141] libmachine: (addons-061866) Setting up store path in /home/jenkins/minikube-integration/16899-6542/.minikube/machines/addons-061866 ...
	I0717 21:38:11.543561   14127 main.go:141] libmachine: (addons-061866) Building disk image from file:///home/jenkins/minikube-integration/16899-6542/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso
	I0717 21:38:11.543609   14127 main.go:141] libmachine: (addons-061866) Downloading /home/jenkins/minikube-integration/16899-6542/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16899-6542/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso...
	I0717 21:38:11.753477   14127 main.go:141] libmachine: (addons-061866) DBG | I0717 21:38:11.753368   14149 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/16899-6542/.minikube/machines/addons-061866/id_rsa...
	I0717 21:38:11.949400   14127 main.go:141] libmachine: (addons-061866) DBG | I0717 21:38:11.949275   14149 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/16899-6542/.minikube/machines/addons-061866/addons-061866.rawdisk...
	I0717 21:38:11.949453   14127 main.go:141] libmachine: (addons-061866) DBG | Writing magic tar header
	I0717 21:38:11.949466   14127 main.go:141] libmachine: (addons-061866) DBG | Writing SSH key tar header
	I0717 21:38:11.949479   14127 main.go:141] libmachine: (addons-061866) DBG | I0717 21:38:11.949384   14149 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/16899-6542/.minikube/machines/addons-061866 ...
	I0717 21:38:11.949554   14127 main.go:141] libmachine: (addons-061866) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16899-6542/.minikube/machines/addons-061866
	I0717 21:38:11.949588   14127 main.go:141] libmachine: (addons-061866) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16899-6542/.minikube/machines
	I0717 21:38:11.949604   14127 main.go:141] libmachine: (addons-061866) Setting executable bit set on /home/jenkins/minikube-integration/16899-6542/.minikube/machines/addons-061866 (perms=drwx------)
	I0717 21:38:11.949616   14127 main.go:141] libmachine: (addons-061866) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16899-6542/.minikube
	I0717 21:38:11.949625   14127 main.go:141] libmachine: (addons-061866) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16899-6542
	I0717 21:38:11.949637   14127 main.go:141] libmachine: (addons-061866) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 21:38:11.949649   14127 main.go:141] libmachine: (addons-061866) DBG | Checking permissions on dir: /home/jenkins
	I0717 21:38:11.949666   14127 main.go:141] libmachine: (addons-061866) Setting executable bit set on /home/jenkins/minikube-integration/16899-6542/.minikube/machines (perms=drwxr-xr-x)
	I0717 21:38:11.949679   14127 main.go:141] libmachine: (addons-061866) DBG | Checking permissions on dir: /home
	I0717 21:38:11.949695   14127 main.go:141] libmachine: (addons-061866) Setting executable bit set on /home/jenkins/minikube-integration/16899-6542/.minikube (perms=drwxr-xr-x)
	I0717 21:38:11.949705   14127 main.go:141] libmachine: (addons-061866) DBG | Skipping /home - not owner
	I0717 21:38:11.949720   14127 main.go:141] libmachine: (addons-061866) Setting executable bit set on /home/jenkins/minikube-integration/16899-6542 (perms=drwxrwxr-x)
	I0717 21:38:11.949730   14127 main.go:141] libmachine: (addons-061866) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 21:38:11.949737   14127 main.go:141] libmachine: (addons-061866) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 21:38:11.949742   14127 main.go:141] libmachine: (addons-061866) Creating domain...
	I0717 21:38:11.950721   14127 main.go:141] libmachine: (addons-061866) define libvirt domain using xml: 
	I0717 21:38:11.950748   14127 main.go:141] libmachine: (addons-061866) <domain type='kvm'>
	I0717 21:38:11.950759   14127 main.go:141] libmachine: (addons-061866)   <name>addons-061866</name>
	I0717 21:38:11.950768   14127 main.go:141] libmachine: (addons-061866)   <memory unit='MiB'>4000</memory>
	I0717 21:38:11.950778   14127 main.go:141] libmachine: (addons-061866)   <vcpu>2</vcpu>
	I0717 21:38:11.950794   14127 main.go:141] libmachine: (addons-061866)   <features>
	I0717 21:38:11.950807   14127 main.go:141] libmachine: (addons-061866)     <acpi/>
	I0717 21:38:11.950820   14127 main.go:141] libmachine: (addons-061866)     <apic/>
	I0717 21:38:11.950829   14127 main.go:141] libmachine: (addons-061866)     <pae/>
	I0717 21:38:11.950837   14127 main.go:141] libmachine: (addons-061866)     
	I0717 21:38:11.950845   14127 main.go:141] libmachine: (addons-061866)   </features>
	I0717 21:38:11.950853   14127 main.go:141] libmachine: (addons-061866)   <cpu mode='host-passthrough'>
	I0717 21:38:11.950866   14127 main.go:141] libmachine: (addons-061866)   
	I0717 21:38:11.950878   14127 main.go:141] libmachine: (addons-061866)   </cpu>
	I0717 21:38:11.950914   14127 main.go:141] libmachine: (addons-061866)   <os>
	I0717 21:38:11.950947   14127 main.go:141] libmachine: (addons-061866)     <type>hvm</type>
	I0717 21:38:11.950983   14127 main.go:141] libmachine: (addons-061866)     <boot dev='cdrom'/>
	I0717 21:38:11.951003   14127 main.go:141] libmachine: (addons-061866)     <boot dev='hd'/>
	I0717 21:38:11.951015   14127 main.go:141] libmachine: (addons-061866)     <bootmenu enable='no'/>
	I0717 21:38:11.951029   14127 main.go:141] libmachine: (addons-061866)   </os>
	I0717 21:38:11.951042   14127 main.go:141] libmachine: (addons-061866)   <devices>
	I0717 21:38:11.951054   14127 main.go:141] libmachine: (addons-061866)     <disk type='file' device='cdrom'>
	I0717 21:38:11.951073   14127 main.go:141] libmachine: (addons-061866)       <source file='/home/jenkins/minikube-integration/16899-6542/.minikube/machines/addons-061866/boot2docker.iso'/>
	I0717 21:38:11.951090   14127 main.go:141] libmachine: (addons-061866)       <target dev='hdc' bus='scsi'/>
	I0717 21:38:11.951104   14127 main.go:141] libmachine: (addons-061866)       <readonly/>
	I0717 21:38:11.951116   14127 main.go:141] libmachine: (addons-061866)     </disk>
	I0717 21:38:11.951127   14127 main.go:141] libmachine: (addons-061866)     <disk type='file' device='disk'>
	I0717 21:38:11.951141   14127 main.go:141] libmachine: (addons-061866)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 21:38:11.951159   14127 main.go:141] libmachine: (addons-061866)       <source file='/home/jenkins/minikube-integration/16899-6542/.minikube/machines/addons-061866/addons-061866.rawdisk'/>
	I0717 21:38:11.951172   14127 main.go:141] libmachine: (addons-061866)       <target dev='hda' bus='virtio'/>
	I0717 21:38:11.951185   14127 main.go:141] libmachine: (addons-061866)     </disk>
	I0717 21:38:11.951197   14127 main.go:141] libmachine: (addons-061866)     <interface type='network'>
	I0717 21:38:11.951211   14127 main.go:141] libmachine: (addons-061866)       <source network='mk-addons-061866'/>
	I0717 21:38:11.951227   14127 main.go:141] libmachine: (addons-061866)       <model type='virtio'/>
	I0717 21:38:11.951241   14127 main.go:141] libmachine: (addons-061866)     </interface>
	I0717 21:38:11.951260   14127 main.go:141] libmachine: (addons-061866)     <interface type='network'>
	I0717 21:38:11.951274   14127 main.go:141] libmachine: (addons-061866)       <source network='default'/>
	I0717 21:38:11.951284   14127 main.go:141] libmachine: (addons-061866)       <model type='virtio'/>
	I0717 21:38:11.951302   14127 main.go:141] libmachine: (addons-061866)     </interface>
	I0717 21:38:11.951318   14127 main.go:141] libmachine: (addons-061866)     <serial type='pty'>
	I0717 21:38:11.951336   14127 main.go:141] libmachine: (addons-061866)       <target port='0'/>
	I0717 21:38:11.951351   14127 main.go:141] libmachine: (addons-061866)     </serial>
	I0717 21:38:11.951364   14127 main.go:141] libmachine: (addons-061866)     <console type='pty'>
	I0717 21:38:11.951377   14127 main.go:141] libmachine: (addons-061866)       <target type='serial' port='0'/>
	I0717 21:38:11.951389   14127 main.go:141] libmachine: (addons-061866)     </console>
	I0717 21:38:11.951399   14127 main.go:141] libmachine: (addons-061866)     <rng model='virtio'>
	I0717 21:38:11.951410   14127 main.go:141] libmachine: (addons-061866)       <backend model='random'>/dev/random</backend>
	I0717 21:38:11.951424   14127 main.go:141] libmachine: (addons-061866)     </rng>
	I0717 21:38:11.951435   14127 main.go:141] libmachine: (addons-061866)     
	I0717 21:38:11.951450   14127 main.go:141] libmachine: (addons-061866)     
	I0717 21:38:11.951466   14127 main.go:141] libmachine: (addons-061866)   </devices>
	I0717 21:38:11.951486   14127 main.go:141] libmachine: (addons-061866) </domain>
	I0717 21:38:11.951498   14127 main.go:141] libmachine: (addons-061866) 
	I0717 21:38:11.957115   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:71:32:0f in network default
	I0717 21:38:11.957626   14127 main.go:141] libmachine: (addons-061866) Ensuring networks are active...
	I0717 21:38:11.957653   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:11.958225   14127 main.go:141] libmachine: (addons-061866) Ensuring network default is active
	I0717 21:38:11.958511   14127 main.go:141] libmachine: (addons-061866) Ensuring network mk-addons-061866 is active
	I0717 21:38:11.958964   14127 main.go:141] libmachine: (addons-061866) Getting domain xml...
	I0717 21:38:11.959635   14127 main.go:141] libmachine: (addons-061866) Creating domain...
	I0717 21:38:13.349929   14127 main.go:141] libmachine: (addons-061866) Waiting to get IP...
	I0717 21:38:13.350653   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:13.351027   14127 main.go:141] libmachine: (addons-061866) DBG | unable to find current IP address of domain addons-061866 in network mk-addons-061866
	I0717 21:38:13.351088   14127 main.go:141] libmachine: (addons-061866) DBG | I0717 21:38:13.351031   14149 retry.go:31] will retry after 309.158837ms: waiting for machine to come up
	I0717 21:38:13.661578   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:13.661905   14127 main.go:141] libmachine: (addons-061866) DBG | unable to find current IP address of domain addons-061866 in network mk-addons-061866
	I0717 21:38:13.661930   14127 main.go:141] libmachine: (addons-061866) DBG | I0717 21:38:13.661872   14149 retry.go:31] will retry after 367.145447ms: waiting for machine to come up
	I0717 21:38:14.030384   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:14.030806   14127 main.go:141] libmachine: (addons-061866) DBG | unable to find current IP address of domain addons-061866 in network mk-addons-061866
	I0717 21:38:14.030834   14127 main.go:141] libmachine: (addons-061866) DBG | I0717 21:38:14.030749   14149 retry.go:31] will retry after 431.090636ms: waiting for machine to come up
	I0717 21:38:14.463212   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:14.463653   14127 main.go:141] libmachine: (addons-061866) DBG | unable to find current IP address of domain addons-061866 in network mk-addons-061866
	I0717 21:38:14.463683   14127 main.go:141] libmachine: (addons-061866) DBG | I0717 21:38:14.463604   14149 retry.go:31] will retry after 546.211784ms: waiting for machine to come up
	I0717 21:38:15.011006   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:15.011349   14127 main.go:141] libmachine: (addons-061866) DBG | unable to find current IP address of domain addons-061866 in network mk-addons-061866
	I0717 21:38:15.011373   14127 main.go:141] libmachine: (addons-061866) DBG | I0717 21:38:15.011315   14149 retry.go:31] will retry after 747.587225ms: waiting for machine to come up
	I0717 21:38:15.760258   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:15.760668   14127 main.go:141] libmachine: (addons-061866) DBG | unable to find current IP address of domain addons-061866 in network mk-addons-061866
	I0717 21:38:15.760701   14127 main.go:141] libmachine: (addons-061866) DBG | I0717 21:38:15.760612   14149 retry.go:31] will retry after 830.180905ms: waiting for machine to come up
	I0717 21:38:16.592157   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:16.592494   14127 main.go:141] libmachine: (addons-061866) DBG | unable to find current IP address of domain addons-061866 in network mk-addons-061866
	I0717 21:38:16.592617   14127 main.go:141] libmachine: (addons-061866) DBG | I0717 21:38:16.592474   14149 retry.go:31] will retry after 1.177327304s: waiting for machine to come up
	I0717 21:38:17.771843   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:17.772224   14127 main.go:141] libmachine: (addons-061866) DBG | unable to find current IP address of domain addons-061866 in network mk-addons-061866
	I0717 21:38:17.772252   14127 main.go:141] libmachine: (addons-061866) DBG | I0717 21:38:17.772175   14149 retry.go:31] will retry after 1.011681574s: waiting for machine to come up
	I0717 21:38:18.785284   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:18.785622   14127 main.go:141] libmachine: (addons-061866) DBG | unable to find current IP address of domain addons-061866 in network mk-addons-061866
	I0717 21:38:18.785653   14127 main.go:141] libmachine: (addons-061866) DBG | I0717 21:38:18.785578   14149 retry.go:31] will retry after 1.772578693s: waiting for machine to come up
	I0717 21:38:20.560382   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:20.560728   14127 main.go:141] libmachine: (addons-061866) DBG | unable to find current IP address of domain addons-061866 in network mk-addons-061866
	I0717 21:38:20.560792   14127 main.go:141] libmachine: (addons-061866) DBG | I0717 21:38:20.560660   14149 retry.go:31] will retry after 1.772071188s: waiting for machine to come up
	I0717 21:38:22.334557   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:22.334926   14127 main.go:141] libmachine: (addons-061866) DBG | unable to find current IP address of domain addons-061866 in network mk-addons-061866
	I0717 21:38:22.334948   14127 main.go:141] libmachine: (addons-061866) DBG | I0717 21:38:22.334892   14149 retry.go:31] will retry after 1.816579901s: waiting for machine to come up
	I0717 21:38:24.153843   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:24.154381   14127 main.go:141] libmachine: (addons-061866) DBG | unable to find current IP address of domain addons-061866 in network mk-addons-061866
	I0717 21:38:24.154411   14127 main.go:141] libmachine: (addons-061866) DBG | I0717 21:38:24.154344   14149 retry.go:31] will retry after 3.20740586s: waiting for machine to come up
	I0717 21:38:27.363255   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:27.363590   14127 main.go:141] libmachine: (addons-061866) DBG | unable to find current IP address of domain addons-061866 in network mk-addons-061866
	I0717 21:38:27.363616   14127 main.go:141] libmachine: (addons-061866) DBG | I0717 21:38:27.363563   14149 retry.go:31] will retry after 3.828810159s: waiting for machine to come up
	I0717 21:38:31.194658   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:31.195035   14127 main.go:141] libmachine: (addons-061866) DBG | unable to find current IP address of domain addons-061866 in network mk-addons-061866
	I0717 21:38:31.195056   14127 main.go:141] libmachine: (addons-061866) DBG | I0717 21:38:31.195013   14149 retry.go:31] will retry after 5.321596499s: waiting for machine to come up
	I0717 21:38:36.518173   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:36.518596   14127 main.go:141] libmachine: (addons-061866) Found IP for machine: 192.168.39.55
	I0717 21:38:36.518621   14127 main.go:141] libmachine: (addons-061866) Reserving static IP address...
	I0717 21:38:36.518641   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has current primary IP address 192.168.39.55 and MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:36.519003   14127 main.go:141] libmachine: (addons-061866) DBG | unable to find host DHCP lease matching {name: "addons-061866", mac: "52:54:00:a9:3c:50", ip: "192.168.39.55"} in network mk-addons-061866
	I0717 21:38:36.593743   14127 main.go:141] libmachine: (addons-061866) DBG | Getting to WaitForSSH function...
	I0717 21:38:36.593768   14127 main.go:141] libmachine: (addons-061866) Reserved static IP address: 192.168.39.55
	I0717 21:38:36.593776   14127 main.go:141] libmachine: (addons-061866) Waiting for SSH to be available...
	I0717 21:38:36.596501   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:36.596895   14127 main.go:141] libmachine: (addons-061866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:3c:50", ip: ""} in network mk-addons-061866: {Iface:virbr1 ExpiryTime:2023-07-17 22:38:27 +0000 UTC Type:0 Mac:52:54:00:a9:3c:50 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a9:3c:50}
	I0717 21:38:36.596926   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined IP address 192.168.39.55 and MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:36.597098   14127 main.go:141] libmachine: (addons-061866) DBG | Using SSH client type: external
	I0717 21:38:36.597130   14127 main.go:141] libmachine: (addons-061866) DBG | Using SSH private key: /home/jenkins/minikube-integration/16899-6542/.minikube/machines/addons-061866/id_rsa (-rw-------)
	I0717 21:38:36.597165   14127 main.go:141] libmachine: (addons-061866) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.55 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16899-6542/.minikube/machines/addons-061866/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 21:38:36.597193   14127 main.go:141] libmachine: (addons-061866) DBG | About to run SSH command:
	I0717 21:38:36.597209   14127 main.go:141] libmachine: (addons-061866) DBG | exit 0
	I0717 21:38:36.701144   14127 main.go:141] libmachine: (addons-061866) DBG | SSH cmd err, output: <nil>: 
	I0717 21:38:36.701432   14127 main.go:141] libmachine: (addons-061866) KVM machine creation complete!
	I0717 21:38:36.701726   14127 main.go:141] libmachine: (addons-061866) Calling .GetConfigRaw
	I0717 21:38:36.702219   14127 main.go:141] libmachine: (addons-061866) Calling .DriverName
	I0717 21:38:36.702405   14127 main.go:141] libmachine: (addons-061866) Calling .DriverName
	I0717 21:38:36.702551   14127 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 21:38:36.702564   14127 main.go:141] libmachine: (addons-061866) Calling .GetState
	I0717 21:38:36.703817   14127 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 21:38:36.703836   14127 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 21:38:36.703846   14127 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 21:38:36.703858   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHHostname
	I0717 21:38:36.706276   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:36.706653   14127 main.go:141] libmachine: (addons-061866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:3c:50", ip: ""} in network mk-addons-061866: {Iface:virbr1 ExpiryTime:2023-07-17 22:38:27 +0000 UTC Type:0 Mac:52:54:00:a9:3c:50 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-061866 Clientid:01:52:54:00:a9:3c:50}
	I0717 21:38:36.706680   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined IP address 192.168.39.55 and MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:36.706848   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHPort
	I0717 21:38:36.707018   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHKeyPath
	I0717 21:38:36.707266   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHKeyPath
	I0717 21:38:36.707401   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHUsername
	I0717 21:38:36.707571   14127 main.go:141] libmachine: Using SSH client type: native
	I0717 21:38:36.708154   14127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0717 21:38:36.708174   14127 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 21:38:36.840405   14127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 21:38:36.840448   14127 main.go:141] libmachine: Detecting the provisioner...
	I0717 21:38:36.840460   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHHostname
	I0717 21:38:36.843262   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:36.843638   14127 main.go:141] libmachine: (addons-061866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:3c:50", ip: ""} in network mk-addons-061866: {Iface:virbr1 ExpiryTime:2023-07-17 22:38:27 +0000 UTC Type:0 Mac:52:54:00:a9:3c:50 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-061866 Clientid:01:52:54:00:a9:3c:50}
	I0717 21:38:36.843665   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined IP address 192.168.39.55 and MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:36.843873   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHPort
	I0717 21:38:36.844091   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHKeyPath
	I0717 21:38:36.844222   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHKeyPath
	I0717 21:38:36.844362   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHUsername
	I0717 21:38:36.844576   14127 main.go:141] libmachine: Using SSH client type: native
	I0717 21:38:36.844989   14127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0717 21:38:36.845002   14127 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 21:38:36.978238   14127 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gf5d52c7-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0717 21:38:36.978314   14127 main.go:141] libmachine: found compatible host: buildroot
	I0717 21:38:36.978324   14127 main.go:141] libmachine: Provisioning with buildroot...
	I0717 21:38:36.978334   14127 main.go:141] libmachine: (addons-061866) Calling .GetMachineName
	I0717 21:38:36.978593   14127 buildroot.go:166] provisioning hostname "addons-061866"
	I0717 21:38:36.978614   14127 main.go:141] libmachine: (addons-061866) Calling .GetMachineName
	I0717 21:38:36.978755   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHHostname
	I0717 21:38:36.981549   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:36.981934   14127 main.go:141] libmachine: (addons-061866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:3c:50", ip: ""} in network mk-addons-061866: {Iface:virbr1 ExpiryTime:2023-07-17 22:38:27 +0000 UTC Type:0 Mac:52:54:00:a9:3c:50 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-061866 Clientid:01:52:54:00:a9:3c:50}
	I0717 21:38:36.981974   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined IP address 192.168.39.55 and MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:36.982099   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHPort
	I0717 21:38:36.982324   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHKeyPath
	I0717 21:38:36.982563   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHKeyPath
	I0717 21:38:36.982669   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHUsername
	I0717 21:38:36.982844   14127 main.go:141] libmachine: Using SSH client type: native
	I0717 21:38:36.983216   14127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0717 21:38:36.983230   14127 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-061866 && echo "addons-061866" | sudo tee /etc/hostname
	I0717 21:38:37.130286   14127 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-061866
	
	I0717 21:38:37.130308   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHHostname
	I0717 21:38:37.132817   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:37.133175   14127 main.go:141] libmachine: (addons-061866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:3c:50", ip: ""} in network mk-addons-061866: {Iface:virbr1 ExpiryTime:2023-07-17 22:38:27 +0000 UTC Type:0 Mac:52:54:00:a9:3c:50 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-061866 Clientid:01:52:54:00:a9:3c:50}
	I0717 21:38:37.133205   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined IP address 192.168.39.55 and MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:37.133488   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHPort
	I0717 21:38:37.133723   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHKeyPath
	I0717 21:38:37.133920   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHKeyPath
	I0717 21:38:37.134109   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHUsername
	I0717 21:38:37.134285   14127 main.go:141] libmachine: Using SSH client type: native
	I0717 21:38:37.134678   14127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0717 21:38:37.134720   14127 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-061866' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-061866/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-061866' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 21:38:37.279719   14127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 21:38:37.279751   14127 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16899-6542/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-6542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-6542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-6542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-6542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-6542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-6542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-6542/.minikube}
	I0717 21:38:37.279780   14127 buildroot.go:174] setting up certificates
	I0717 21:38:37.279793   14127 provision.go:83] configureAuth start
	I0717 21:38:37.279806   14127 main.go:141] libmachine: (addons-061866) Calling .GetMachineName
	I0717 21:38:37.280075   14127 main.go:141] libmachine: (addons-061866) Calling .GetIP
	I0717 21:38:37.282563   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:37.282997   14127 main.go:141] libmachine: (addons-061866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:3c:50", ip: ""} in network mk-addons-061866: {Iface:virbr1 ExpiryTime:2023-07-17 22:38:27 +0000 UTC Type:0 Mac:52:54:00:a9:3c:50 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-061866 Clientid:01:52:54:00:a9:3c:50}
	I0717 21:38:37.283023   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined IP address 192.168.39.55 and MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:37.283223   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHHostname
	I0717 21:38:37.285511   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:37.285940   14127 main.go:141] libmachine: (addons-061866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:3c:50", ip: ""} in network mk-addons-061866: {Iface:virbr1 ExpiryTime:2023-07-17 22:38:27 +0000 UTC Type:0 Mac:52:54:00:a9:3c:50 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-061866 Clientid:01:52:54:00:a9:3c:50}
	I0717 21:38:37.285968   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined IP address 192.168.39.55 and MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:37.286016   14127 provision.go:138] copyHostCerts
	I0717 21:38:37.286072   14127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-6542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-6542/.minikube/ca.pem (1082 bytes)
	I0717 21:38:37.286225   14127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-6542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-6542/.minikube/cert.pem (1123 bytes)
	I0717 21:38:37.286285   14127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-6542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-6542/.minikube/key.pem (1675 bytes)
	I0717 21:38:37.286335   14127 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-6542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-6542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-6542/.minikube/certs/ca-key.pem org=jenkins.addons-061866 san=[192.168.39.55 192.168.39.55 localhost 127.0.0.1 minikube addons-061866]
	I0717 21:38:37.594626   14127 provision.go:172] copyRemoteCerts
	I0717 21:38:37.594683   14127 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 21:38:37.594704   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHHostname
	I0717 21:38:37.597590   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:37.597946   14127 main.go:141] libmachine: (addons-061866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:3c:50", ip: ""} in network mk-addons-061866: {Iface:virbr1 ExpiryTime:2023-07-17 22:38:27 +0000 UTC Type:0 Mac:52:54:00:a9:3c:50 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-061866 Clientid:01:52:54:00:a9:3c:50}
	I0717 21:38:37.597978   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined IP address 192.168.39.55 and MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:37.598187   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHPort
	I0717 21:38:37.598397   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHKeyPath
	I0717 21:38:37.598542   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHUsername
	I0717 21:38:37.598703   14127 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-6542/.minikube/machines/addons-061866/id_rsa Username:docker}
	I0717 21:38:37.695770   14127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 21:38:37.719248   14127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6542/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0717 21:38:37.743210   14127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 21:38:37.766704   14127 provision.go:86] duration metric: configureAuth took 486.895198ms
	I0717 21:38:37.766742   14127 buildroot.go:189] setting minikube options for container-runtime
	I0717 21:38:37.766943   14127 config.go:182] Loaded profile config "addons-061866": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0717 21:38:37.766966   14127 main.go:141] libmachine: Checking connection to Docker...
	I0717 21:38:37.766983   14127 main.go:141] libmachine: (addons-061866) Calling .GetURL
	I0717 21:38:37.768124   14127 main.go:141] libmachine: (addons-061866) DBG | Using libvirt version 6000000
	I0717 21:38:37.770164   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:37.770460   14127 main.go:141] libmachine: (addons-061866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:3c:50", ip: ""} in network mk-addons-061866: {Iface:virbr1 ExpiryTime:2023-07-17 22:38:27 +0000 UTC Type:0 Mac:52:54:00:a9:3c:50 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-061866 Clientid:01:52:54:00:a9:3c:50}
	I0717 21:38:37.770484   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined IP address 192.168.39.55 and MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:37.770633   14127 main.go:141] libmachine: Docker is up and running!
	I0717 21:38:37.770696   14127 main.go:141] libmachine: Reticulating splines...
	I0717 21:38:37.770704   14127 client.go:171] LocalClient.Create took 26.511571388s
	I0717 21:38:37.770738   14127 start.go:167] duration metric: libmachine.API.Create for "addons-061866" took 26.511638227s
	I0717 21:38:37.770750   14127 start.go:300] post-start starting for "addons-061866" (driver="kvm2")
	I0717 21:38:37.770763   14127 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 21:38:37.770790   14127 main.go:141] libmachine: (addons-061866) Calling .DriverName
	I0717 21:38:37.771035   14127 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 21:38:37.771063   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHHostname
	I0717 21:38:37.773326   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:37.773634   14127 main.go:141] libmachine: (addons-061866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:3c:50", ip: ""} in network mk-addons-061866: {Iface:virbr1 ExpiryTime:2023-07-17 22:38:27 +0000 UTC Type:0 Mac:52:54:00:a9:3c:50 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-061866 Clientid:01:52:54:00:a9:3c:50}
	I0717 21:38:37.773664   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined IP address 192.168.39.55 and MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:37.773807   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHPort
	I0717 21:38:37.773984   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHKeyPath
	I0717 21:38:37.774183   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHUsername
	I0717 21:38:37.774335   14127 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-6542/.minikube/machines/addons-061866/id_rsa Username:docker}
	I0717 21:38:37.871595   14127 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 21:38:37.876034   14127 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 21:38:37.876059   14127 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-6542/.minikube/addons for local assets ...
	I0717 21:38:37.876144   14127 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-6542/.minikube/files for local assets ...
	I0717 21:38:37.876175   14127 start.go:303] post-start completed in 105.415808ms
	I0717 21:38:37.876214   14127 main.go:141] libmachine: (addons-061866) Calling .GetConfigRaw
	I0717 21:38:37.876769   14127 main.go:141] libmachine: (addons-061866) Calling .GetIP
	I0717 21:38:37.879359   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:37.879726   14127 main.go:141] libmachine: (addons-061866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:3c:50", ip: ""} in network mk-addons-061866: {Iface:virbr1 ExpiryTime:2023-07-17 22:38:27 +0000 UTC Type:0 Mac:52:54:00:a9:3c:50 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-061866 Clientid:01:52:54:00:a9:3c:50}
	I0717 21:38:37.879761   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined IP address 192.168.39.55 and MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:37.880058   14127 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/config.json ...
	I0717 21:38:37.880227   14127 start.go:128] duration metric: createHost completed in 26.638324507s
	I0717 21:38:37.880249   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHHostname
	I0717 21:38:37.882421   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:37.882750   14127 main.go:141] libmachine: (addons-061866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:3c:50", ip: ""} in network mk-addons-061866: {Iface:virbr1 ExpiryTime:2023-07-17 22:38:27 +0000 UTC Type:0 Mac:52:54:00:a9:3c:50 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-061866 Clientid:01:52:54:00:a9:3c:50}
	I0717 21:38:37.882773   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined IP address 192.168.39.55 and MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:37.882946   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHPort
	I0717 21:38:37.883107   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHKeyPath
	I0717 21:38:37.883222   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHKeyPath
	I0717 21:38:37.883318   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHUsername
	I0717 21:38:37.883493   14127 main.go:141] libmachine: Using SSH client type: native
	I0717 21:38:37.883919   14127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0717 21:38:37.883932   14127 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 21:38:38.018037   14127 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689629918.002691793
	
	I0717 21:38:38.018063   14127 fix.go:206] guest clock: 1689629918.002691793
	I0717 21:38:38.018076   14127 fix.go:219] Guest: 2023-07-17 21:38:38.002691793 +0000 UTC Remote: 2023-07-17 21:38:37.88023924 +0000 UTC m=+26.741348792 (delta=122.452553ms)
	I0717 21:38:38.018131   14127 fix.go:190] guest clock delta is within tolerance: 122.452553ms
	I0717 21:38:38.018142   14127 start.go:83] releasing machines lock for "addons-061866", held for 26.77632608s
	I0717 21:38:38.018179   14127 main.go:141] libmachine: (addons-061866) Calling .DriverName
	I0717 21:38:38.018437   14127 main.go:141] libmachine: (addons-061866) Calling .GetIP
	I0717 21:38:38.020893   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:38.021374   14127 main.go:141] libmachine: (addons-061866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:3c:50", ip: ""} in network mk-addons-061866: {Iface:virbr1 ExpiryTime:2023-07-17 22:38:27 +0000 UTC Type:0 Mac:52:54:00:a9:3c:50 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-061866 Clientid:01:52:54:00:a9:3c:50}
	I0717 21:38:38.021401   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined IP address 192.168.39.55 and MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:38.021573   14127 main.go:141] libmachine: (addons-061866) Calling .DriverName
	I0717 21:38:38.022041   14127 main.go:141] libmachine: (addons-061866) Calling .DriverName
	I0717 21:38:38.022212   14127 main.go:141] libmachine: (addons-061866) Calling .DriverName
	I0717 21:38:38.022304   14127 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 21:38:38.022357   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHHostname
	I0717 21:38:38.022426   14127 ssh_runner.go:195] Run: cat /version.json
	I0717 21:38:38.022458   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHHostname
	I0717 21:38:38.024839   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:38.025112   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:38.025152   14127 main.go:141] libmachine: (addons-061866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:3c:50", ip: ""} in network mk-addons-061866: {Iface:virbr1 ExpiryTime:2023-07-17 22:38:27 +0000 UTC Type:0 Mac:52:54:00:a9:3c:50 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-061866 Clientid:01:52:54:00:a9:3c:50}
	I0717 21:38:38.025193   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined IP address 192.168.39.55 and MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:38.025357   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHPort
	I0717 21:38:38.025536   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHKeyPath
	I0717 21:38:38.025569   14127 main.go:141] libmachine: (addons-061866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:3c:50", ip: ""} in network mk-addons-061866: {Iface:virbr1 ExpiryTime:2023-07-17 22:38:27 +0000 UTC Type:0 Mac:52:54:00:a9:3c:50 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-061866 Clientid:01:52:54:00:a9:3c:50}
	I0717 21:38:38.025593   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined IP address 192.168.39.55 and MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:38.025709   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHUsername
	I0717 21:38:38.025720   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHPort
	I0717 21:38:38.025859   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHKeyPath
	I0717 21:38:38.025872   14127 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-6542/.minikube/machines/addons-061866/id_rsa Username:docker}
	I0717 21:38:38.025972   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHUsername
	I0717 21:38:38.026089   14127 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-6542/.minikube/machines/addons-061866/id_rsa Username:docker}
	I0717 21:38:38.142966   14127 ssh_runner.go:195] Run: systemctl --version
	I0717 21:38:38.148969   14127 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 21:38:38.154570   14127 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 21:38:38.154659   14127 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 21:38:38.170061   14127 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 21:38:38.170090   14127 start.go:466] detecting cgroup driver to use...
	I0717 21:38:38.170154   14127 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 21:38:38.201537   14127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 21:38:38.213848   14127 docker.go:196] disabling cri-docker service (if available) ...
	I0717 21:38:38.213919   14127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 21:38:38.226703   14127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 21:38:38.239823   14127 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 21:38:38.348307   14127 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 21:38:38.466846   14127 docker.go:212] disabling docker service ...
	I0717 21:38:38.466920   14127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 21:38:38.480804   14127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 21:38:38.493739   14127 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 21:38:38.604220   14127 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 21:38:38.712281   14127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 21:38:38.726205   14127 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 21:38:38.743826   14127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 21:38:38.754438   14127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 21:38:38.765267   14127 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 21:38:38.765336   14127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 21:38:38.776111   14127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 21:38:38.786968   14127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 21:38:38.798150   14127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 21:38:38.808969   14127 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 21:38:38.819926   14127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 21:38:38.831904   14127 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 21:38:38.842728   14127 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 21:38:38.842790   14127 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 21:38:38.858680   14127 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 21:38:38.869942   14127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 21:38:38.984833   14127 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 21:38:39.016522   14127 start.go:513] Will wait 60s for socket path /run/containerd/containerd.sock
	I0717 21:38:39.016599   14127 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0717 21:38:39.022180   14127 retry.go:31] will retry after 829.623214ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0717 21:38:39.852232   14127 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0717 21:38:39.858177   14127 start.go:534] Will wait 60s for crictl version
	I0717 21:38:39.858276   14127 ssh_runner.go:195] Run: which crictl
	I0717 21:38:39.862592   14127 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 21:38:39.895652   14127 start.go:550] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.2
	RuntimeApiVersion:  v1alpha2
	I0717 21:38:39.895727   14127 ssh_runner.go:195] Run: containerd --version
	I0717 21:38:39.921946   14127 ssh_runner.go:195] Run: containerd --version
	I0717 21:38:39.951550   14127 out.go:177] * Preparing Kubernetes v1.27.3 on containerd 1.7.2 ...
	I0717 21:38:39.953073   14127 main.go:141] libmachine: (addons-061866) Calling .GetIP
	I0717 21:38:39.955589   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:39.955934   14127 main.go:141] libmachine: (addons-061866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:3c:50", ip: ""} in network mk-addons-061866: {Iface:virbr1 ExpiryTime:2023-07-17 22:38:27 +0000 UTC Type:0 Mac:52:54:00:a9:3c:50 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-061866 Clientid:01:52:54:00:a9:3c:50}
	I0717 21:38:39.955966   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined IP address 192.168.39.55 and MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:38:39.956176   14127 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 21:38:39.960553   14127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 21:38:39.975168   14127 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0717 21:38:39.975223   14127 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 21:38:40.009961   14127 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 21:38:40.010035   14127 ssh_runner.go:195] Run: which lz4
	I0717 21:38:40.013974   14127 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 21:38:40.018342   14127 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 21:38:40.018374   14127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (436670020 bytes)
	I0717 21:38:41.738646   14127 containerd.go:547] Took 1.724701 seconds to copy over tarball
	I0717 21:38:41.738713   14127 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 21:38:44.663354   14127 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.924614074s)
	I0717 21:38:44.663384   14127 containerd.go:554] Took 2.924715 seconds to extract the tarball
	I0717 21:38:44.663399   14127 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 21:38:44.704886   14127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 21:38:44.807072   14127 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 21:38:44.830579   14127 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 21:38:45.872829   14127 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.04221771s)
	I0717 21:38:45.872931   14127 containerd.go:604] all images are preloaded for containerd runtime.
	I0717 21:38:45.872944   14127 cache_images.go:84] Images are preloaded, skipping loading
	I0717 21:38:45.873002   14127 ssh_runner.go:195] Run: sudo crictl info
	I0717 21:38:45.904955   14127 cni.go:84] Creating CNI manager for ""
	I0717 21:38:45.904983   14127 cni.go:152] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0717 21:38:45.905009   14127 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 21:38:45.905035   14127 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.55 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-061866 NodeName:addons-061866 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.55"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.55 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 21:38:45.905315   14127 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.55
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-061866"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.55
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.55"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 21:38:45.905414   14127 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-061866 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.55
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:addons-061866 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 21:38:45.905483   14127 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 21:38:45.916183   14127 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 21:38:45.916259   14127 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 21:38:45.926203   14127 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (386 bytes)
	I0717 21:38:45.943391   14127 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 21:38:45.960353   14127 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0717 21:38:45.977457   14127 ssh_runner.go:195] Run: grep 192.168.39.55	control-plane.minikube.internal$ /etc/hosts
	I0717 21:38:45.981654   14127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.55	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 21:38:45.993536   14127 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866 for IP: 192.168.39.55
	I0717 21:38:45.993570   14127 certs.go:190] acquiring lock for shared ca certs: {Name:mkb479f4f6c65a9d74086db1804c80bb8532c90f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:38:45.993753   14127 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/16899-6542/.minikube/ca.key
	I0717 21:38:46.320100   14127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-6542/.minikube/ca.crt ...
	I0717 21:38:46.320131   14127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-6542/.minikube/ca.crt: {Name:mk1b2c9b3e2db60fff86acec2670438e44bbb726 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:38:46.320294   14127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-6542/.minikube/ca.key ...
	I0717 21:38:46.320306   14127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-6542/.minikube/ca.key: {Name:mk983354af3860c63257bbad41b0e31fa7cd9987 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:38:46.320371   14127 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/16899-6542/.minikube/proxy-client-ca.key
	I0717 21:38:46.532529   14127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-6542/.minikube/proxy-client-ca.crt ...
	I0717 21:38:46.532558   14127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-6542/.minikube/proxy-client-ca.crt: {Name:mk1821d9ea5a734564cc1b0b17aa8ce72e126cc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:38:46.532705   14127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-6542/.minikube/proxy-client-ca.key ...
	I0717 21:38:46.532715   14127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-6542/.minikube/proxy-client-ca.key: {Name:mka65ce7c8ba9233f080dbe8006db7df46a07bc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:38:46.532808   14127 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.key
	I0717 21:38:46.532828   14127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.crt with IP's: []
	I0717 21:38:46.603513   14127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.crt ...
	I0717 21:38:46.603543   14127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.crt: {Name:mkf2a50c94fcb6d98d6f226a8911b74933c8ec0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:38:46.603681   14127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.key ...
	I0717 21:38:46.603691   14127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.key: {Name:mkc9b82fa32ba4748e104576511994a4df9da960 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:38:46.603756   14127 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/apiserver.key.23a33066
	I0717 21:38:46.603779   14127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/apiserver.crt.23a33066 with IP's: [192.168.39.55 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 21:38:46.700722   14127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/apiserver.crt.23a33066 ...
	I0717 21:38:46.700756   14127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/apiserver.crt.23a33066: {Name:mk4c1a756bf4a2e10b063ed0e7f510fabc0611fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:38:46.700956   14127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/apiserver.key.23a33066 ...
	I0717 21:38:46.700972   14127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/apiserver.key.23a33066: {Name:mk8600a87a2a118bfd85632e3019b5edeb26c334 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:38:46.701058   14127 certs.go:337] copying /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/apiserver.crt.23a33066 -> /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/apiserver.crt
	I0717 21:38:46.701140   14127 certs.go:341] copying /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/apiserver.key.23a33066 -> /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/apiserver.key
	I0717 21:38:46.701203   14127 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/proxy-client.key
	I0717 21:38:46.701243   14127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/proxy-client.crt with IP's: []
	I0717 21:38:46.761684   14127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/proxy-client.crt ...
	I0717 21:38:46.761716   14127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/proxy-client.crt: {Name:mk7998601f45f7f152f1a3b37fc6b088e1946469 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:38:46.761889   14127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/proxy-client.key ...
	I0717 21:38:46.761904   14127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/proxy-client.key: {Name:mk392e2d9946e9f7dbdd7c0567c2105b55050b9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:38:46.762090   14127 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-6542/.minikube/certs/home/jenkins/minikube-integration/16899-6542/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 21:38:46.762136   14127 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-6542/.minikube/certs/home/jenkins/minikube-integration/16899-6542/.minikube/certs/ca.pem (1082 bytes)
	I0717 21:38:46.762174   14127 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-6542/.minikube/certs/home/jenkins/minikube-integration/16899-6542/.minikube/certs/cert.pem (1123 bytes)
	I0717 21:38:46.762210   14127 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-6542/.minikube/certs/home/jenkins/minikube-integration/16899-6542/.minikube/certs/key.pem (1675 bytes)
	I0717 21:38:46.762724   14127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 21:38:46.786730   14127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 21:38:46.808804   14127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 21:38:46.832258   14127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 21:38:46.854124   14127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 21:38:46.875945   14127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 21:38:46.898633   14127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 21:38:46.920079   14127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 21:38:46.942635   14127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-6542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 21:38:46.964148   14127 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 21:38:46.979825   14127 ssh_runner.go:195] Run: openssl version
	I0717 21:38:46.985329   14127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 21:38:46.996187   14127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 21:38:47.000765   14127 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:38 /usr/share/ca-certificates/minikubeCA.pem
	I0717 21:38:47.000825   14127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 21:38:47.006312   14127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 21:38:47.016901   14127 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 21:38:47.020895   14127 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 21:38:47.020938   14127 kubeadm.go:404] StartCluster: {Name:addons-061866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-061866 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.55 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:38:47.021032   14127 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0717 21:38:47.021116   14127 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 21:38:47.052229   14127 cri.go:89] found id: ""
	I0717 21:38:47.052327   14127 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 21:38:47.062408   14127 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 21:38:47.071708   14127 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 21:38:47.081136   14127 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 21:38:47.081186   14127 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 21:38:47.135166   14127 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0717 21:38:47.135234   14127 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 21:38:47.259703   14127 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 21:38:47.259837   14127 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 21:38:47.259941   14127 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 21:38:47.443257   14127 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 21:38:47.446746   14127 out.go:204]   - Generating certificates and keys ...
	I0717 21:38:47.446898   14127 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 21:38:47.447006   14127 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 21:38:47.505769   14127 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 21:38:47.828779   14127 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 21:38:47.921914   14127 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 21:38:48.253047   14127 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 21:38:48.452719   14127 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 21:38:48.453004   14127 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-061866 localhost] and IPs [192.168.39.55 127.0.0.1 ::1]
	I0717 21:38:48.587404   14127 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 21:38:48.587599   14127 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-061866 localhost] and IPs [192.168.39.55 127.0.0.1 ::1]
	I0717 21:38:48.697070   14127 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 21:38:48.821404   14127 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 21:38:49.194998   14127 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 21:38:49.197770   14127 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 21:38:49.353809   14127 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 21:38:49.680842   14127 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 21:38:49.839956   14127 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 21:38:50.216692   14127 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 21:38:50.232933   14127 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 21:38:50.234050   14127 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 21:38:50.234155   14127 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 21:38:50.349928   14127 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 21:38:50.352057   14127 out.go:204]   - Booting up control plane ...
	I0717 21:38:50.352179   14127 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 21:38:50.352274   14127 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 21:38:50.352405   14127 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 21:38:50.358424   14127 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 21:38:50.360699   14127 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 21:38:58.364808   14127 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.005019 seconds
	I0717 21:38:58.364941   14127 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 21:38:58.385882   14127 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 21:38:58.917936   14127 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 21:38:58.918179   14127 kubeadm.go:322] [mark-control-plane] Marking the node addons-061866 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 21:38:59.432392   14127 kubeadm.go:322] [bootstrap-token] Using token: v04tzp.j879uztil8ydpi83
	I0717 21:38:59.434167   14127 out.go:204]   - Configuring RBAC rules ...
	I0717 21:38:59.434290   14127 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 21:38:59.440702   14127 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 21:38:59.450435   14127 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 21:38:59.454986   14127 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 21:38:59.463556   14127 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 21:38:59.469899   14127 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 21:38:59.488992   14127 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 21:38:59.704252   14127 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 21:38:59.850840   14127 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 21:38:59.851898   14127 kubeadm.go:322] 
	I0717 21:38:59.851985   14127 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 21:38:59.852000   14127 kubeadm.go:322] 
	I0717 21:38:59.852093   14127 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 21:38:59.852105   14127 kubeadm.go:322] 
	I0717 21:38:59.852150   14127 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 21:38:59.852297   14127 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 21:38:59.852374   14127 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 21:38:59.852389   14127 kubeadm.go:322] 
	I0717 21:38:59.852464   14127 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0717 21:38:59.852474   14127 kubeadm.go:322] 
	I0717 21:38:59.852518   14127 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 21:38:59.852524   14127 kubeadm.go:322] 
	I0717 21:38:59.852594   14127 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 21:38:59.852730   14127 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 21:38:59.852832   14127 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 21:38:59.852866   14127 kubeadm.go:322] 
	I0717 21:38:59.853009   14127 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 21:38:59.853115   14127 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 21:38:59.853129   14127 kubeadm.go:322] 
	I0717 21:38:59.853262   14127 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token v04tzp.j879uztil8ydpi83 \
	I0717 21:38:59.853364   14127 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7c4ede2ab3e4a692f2cb65f2b1537acb3a48788df3dfc1c97d5886c405599d52 \
	I0717 21:38:59.853397   14127 kubeadm.go:322] 	--control-plane 
	I0717 21:38:59.853407   14127 kubeadm.go:322] 
	I0717 21:38:59.853471   14127 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 21:38:59.853478   14127 kubeadm.go:322] 
	I0717 21:38:59.853547   14127 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token v04tzp.j879uztil8ydpi83 \
	I0717 21:38:59.853670   14127 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7c4ede2ab3e4a692f2cb65f2b1537acb3a48788df3dfc1c97d5886c405599d52 
	I0717 21:38:59.856211   14127 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 21:38:59.856241   14127 cni.go:84] Creating CNI manager for ""
	I0717 21:38:59.856259   14127 cni.go:152] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0717 21:38:59.858256   14127 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 21:38:59.859689   14127 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 21:38:59.876028   14127 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 21:38:59.911543   14127 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 21:38:59.911634   14127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:38:59.911686   14127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.31.0 minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8 minikube.k8s.io/name=addons-061866 minikube.k8s.io/updated_at=2023_07_17T21_38_59_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:39:00.191463   14127 ops.go:34] apiserver oom_adj: -16
	I0717 21:39:00.191595   14127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:39:00.808556   14127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:39:01.308909   14127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:39:01.808251   14127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:39:02.307939   14127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:39:02.808349   14127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:39:03.308536   14127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:39:03.808587   14127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:39:04.308555   14127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:39:04.808068   14127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:39:05.308150   14127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:39:05.808566   14127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:39:06.307980   14127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:39:06.808689   14127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:39:07.307896   14127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:39:07.808905   14127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:39:08.308902   14127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:39:08.808941   14127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:39:09.308477   14127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:39:09.808063   14127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:39:10.308507   14127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:39:10.808504   14127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:39:11.308761   14127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:39:11.808419   14127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:39:12.308509   14127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:39:12.510279   14127 kubeadm.go:1081] duration metric: took 12.598707588s to wait for elevateKubeSystemPrivileges.
	I0717 21:39:12.510312   14127 kubeadm.go:406] StartCluster complete in 25.489376229s
	I0717 21:39:12.510326   14127 settings.go:142] acquiring lock: {Name:mk1fbf08d023fa254cc49c7d4b178bb9eceb5e0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:39:12.510471   14127 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16899-6542/kubeconfig
	I0717 21:39:12.510851   14127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-6542/kubeconfig: {Name:mk40e8a17d9a1cd0846f56b1c77fcc2de53c8c37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:39:12.511046   14127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 21:39:12.511140   14127 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0717 21:39:12.511248   14127 config.go:182] Loaded profile config "addons-061866": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0717 21:39:12.511265   14127 addons.go:69] Setting cloud-spanner=true in profile "addons-061866"
	I0717 21:39:12.511269   14127 addons.go:69] Setting ingress-dns=true in profile "addons-061866"
	I0717 21:39:12.511284   14127 addons.go:231] Setting addon ingress-dns=true in "addons-061866"
	I0717 21:39:12.511291   14127 addons.go:69] Setting gcp-auth=true in profile "addons-061866"
	I0717 21:39:12.511250   14127 addons.go:69] Setting volumesnapshots=true in profile "addons-061866"
	I0717 21:39:12.511308   14127 mustload.go:65] Loading cluster: addons-061866
	I0717 21:39:12.511310   14127 addons.go:231] Setting addon volumesnapshots=true in "addons-061866"
	I0717 21:39:12.511315   14127 addons.go:69] Setting default-storageclass=true in profile "addons-061866"
	I0717 21:39:12.511342   14127 host.go:66] Checking if "addons-061866" exists ...
	I0717 21:39:12.511355   14127 host.go:66] Checking if "addons-061866" exists ...
	I0717 21:39:12.511328   14127 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-061866"
	I0717 21:39:12.511365   14127 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-061866"
	I0717 21:39:12.511378   14127 addons.go:69] Setting registry=true in profile "addons-061866"
	I0717 21:39:12.511393   14127 addons.go:231] Setting addon registry=true in "addons-061866"
	I0717 21:39:12.511432   14127 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-061866"
	I0717 21:39:12.511434   14127 host.go:66] Checking if "addons-061866" exists ...
	I0717 21:39:12.511473   14127 host.go:66] Checking if "addons-061866" exists ...
	I0717 21:39:12.511525   14127 config.go:182] Loaded profile config "addons-061866": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0717 21:39:12.511782   14127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:39:12.511788   14127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:39:12.511804   14127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:39:12.511804   14127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:39:12.511802   14127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:39:12.511822   14127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:39:12.511834   14127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:39:12.511835   14127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:39:12.511841   14127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:39:12.511860   14127 addons.go:69] Setting helm-tiller=true in profile "addons-061866"
	I0717 21:39:12.511255   14127 addons.go:69] Setting ingress=true in profile "addons-061866"
	I0717 21:39:12.511876   14127 addons.go:231] Setting addon helm-tiller=true in "addons-061866"
	I0717 21:39:12.511882   14127 addons.go:231] Setting addon ingress=true in "addons-061866"
	I0717 21:39:12.511285   14127 addons.go:231] Setting addon cloud-spanner=true in "addons-061866"
	I0717 21:39:12.511886   14127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:39:12.511897   14127 addons.go:69] Setting inspektor-gadget=true in profile "addons-061866"
	I0717 21:39:12.511897   14127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:39:12.511905   14127 addons.go:231] Setting addon inspektor-gadget=true in "addons-061866"
	I0717 21:39:12.511358   14127 addons.go:69] Setting storage-provisioner=true in profile "addons-061866"
	I0717 21:39:12.511919   14127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:39:12.511929   14127 addons.go:231] Setting addon storage-provisioner=true in "addons-061866"
	I0717 21:39:12.511934   14127 addons.go:69] Setting metrics-server=true in profile "addons-061866"
	I0717 21:39:12.511953   14127 addons.go:231] Setting addon metrics-server=true in "addons-061866"
	I0717 21:39:12.512023   14127 host.go:66] Checking if "addons-061866" exists ...
	I0717 21:39:12.512076   14127 host.go:66] Checking if "addons-061866" exists ...
	I0717 21:39:12.512161   14127 host.go:66] Checking if "addons-061866" exists ...
	I0717 21:39:12.512183   14127 host.go:66] Checking if "addons-061866" exists ...
	I0717 21:39:12.512389   14127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:39:12.512413   14127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:39:12.512442   14127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:39:12.512464   14127 host.go:66] Checking if "addons-061866" exists ...
	I0717 21:39:12.512472   14127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:39:12.512618   14127 host.go:66] Checking if "addons-061866" exists ...
	I0717 21:39:12.512662   14127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:39:12.512692   14127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:39:12.512759   14127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:39:12.512784   14127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:39:12.512872   14127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:39:12.512900   14127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:39:12.512953   14127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:39:12.512992   14127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:39:12.532528   14127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45031
	I0717 21:39:12.532860   14127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37159
	I0717 21:39:12.532882   14127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39219
	I0717 21:39:12.533016   14127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35507
	I0717 21:39:12.533033   14127 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:39:12.533327   14127 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:39:12.533510   14127 main.go:141] libmachine: Using API Version  1
	I0717 21:39:12.533534   14127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:39:12.533610   14127 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:39:12.533656   14127 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:39:12.533937   14127 main.go:141] libmachine: Using API Version  1
	I0717 21:39:12.533954   14127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:39:12.534075   14127 main.go:141] libmachine: Using API Version  1
	I0717 21:39:12.534086   14127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:39:12.534148   14127 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:39:12.534276   14127 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:39:12.534424   14127 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:39:12.534854   14127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:39:12.534872   14127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:39:12.535300   14127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35655
	I0717 21:39:12.535464   14127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:39:12.535489   14127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:39:12.535733   14127 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:39:12.536147   14127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:39:12.536185   14127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:39:12.536758   14127 main.go:141] libmachine: Using API Version  1
	I0717 21:39:12.536775   14127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:39:12.537172   14127 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:39:12.537603   14127 main.go:141] libmachine: Using API Version  1
	I0717 21:39:12.537618   14127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:39:12.537679   14127 main.go:141] libmachine: (addons-061866) Calling .GetState
	I0717 21:39:12.538890   14127 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:39:12.539533   14127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:39:12.539577   14127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:39:12.541443   14127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43657
	I0717 21:39:12.541783   14127 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:39:12.542272   14127 main.go:141] libmachine: Using API Version  1
	I0717 21:39:12.542290   14127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:39:12.542611   14127 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:39:12.543162   14127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:39:12.543199   14127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:39:12.543442   14127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36871
	I0717 21:39:12.543885   14127 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:39:12.544393   14127 main.go:141] libmachine: Using API Version  1
	I0717 21:39:12.544414   14127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:39:12.544788   14127 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:39:12.545326   14127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:39:12.545367   14127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:39:12.557602   14127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I0717 21:39:12.558267   14127 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:39:12.558785   14127 main.go:141] libmachine: Using API Version  1
	I0717 21:39:12.558807   14127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:39:12.559132   14127 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:39:12.559698   14127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:39:12.559902   14127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:39:12.560065   14127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41511
	I0717 21:39:12.560499   14127 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:39:12.561018   14127 main.go:141] libmachine: Using API Version  1
	I0717 21:39:12.561070   14127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:39:12.561411   14127 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:39:12.561575   14127 main.go:141] libmachine: (addons-061866) Calling .GetState
	I0717 21:39:12.561803   14127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40843
	I0717 21:39:12.562196   14127 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:39:12.562741   14127 main.go:141] libmachine: Using API Version  1
	I0717 21:39:12.562757   14127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:39:12.563112   14127 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:39:12.563323   14127 main.go:141] libmachine: (addons-061866) Calling .GetState
	I0717 21:39:12.563801   14127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39983
	I0717 21:39:12.564688   14127 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:39:12.565743   14127 main.go:141] libmachine: Using API Version  1
	I0717 21:39:12.565761   14127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:39:12.566430   14127 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:39:12.566785   14127 addons.go:231] Setting addon default-storageclass=true in "addons-061866"
	I0717 21:39:12.566826   14127 host.go:66] Checking if "addons-061866" exists ...
	I0717 21:39:12.567175   14127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:39:12.567198   14127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:39:12.567386   14127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33573
	I0717 21:39:12.567451   14127 main.go:141] libmachine: (addons-061866) Calling .DriverName
	I0717 21:39:12.567496   14127 host.go:66] Checking if "addons-061866" exists ...
	I0717 21:39:12.567556   14127 main.go:141] libmachine: (addons-061866) Calling .GetState
	I0717 21:39:12.567813   14127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:39:12.567831   14127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:39:12.567833   14127 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:39:12.571373   14127 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.7
	I0717 21:39:12.568623   14127 main.go:141] libmachine: Using API Version  1
	I0717 21:39:12.569809   14127 main.go:141] libmachine: (addons-061866) Calling .DriverName
	I0717 21:39:12.570804   14127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41499
	I0717 21:39:12.573241   14127 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0717 21:39:12.573253   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0717 21:39:12.573279   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHHostname
	I0717 21:39:12.573322   14127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:39:12.573707   14127 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:39:12.574353   14127 main.go:141] libmachine: Using API Version  1
	I0717 21:39:12.574369   14127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:39:12.574442   14127 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:39:12.575143   14127 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:39:12.577184   14127 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0717 21:39:12.575516   14127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40999
	I0717 21:39:12.576585   14127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:39:12.577294   14127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:39:12.579050   14127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:39:12.579245   14127 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 21:39:12.579255   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0717 21:39:12.579272   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHHostname
	I0717 21:39:12.579415   14127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:39:12.580246   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:39:12.580273   14127 main.go:141] libmachine: (addons-061866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:3c:50", ip: ""} in network mk-addons-061866: {Iface:virbr1 ExpiryTime:2023-07-17 22:38:27 +0000 UTC Type:0 Mac:52:54:00:a9:3c:50 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-061866 Clientid:01:52:54:00:a9:3c:50}
	I0717 21:39:12.580291   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined IP address 192.168.39.55 and MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:39:12.580318   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHPort
	I0717 21:39:12.580509   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHKeyPath
	I0717 21:39:12.580715   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHUsername
	I0717 21:39:12.580886   14127 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-6542/.minikube/machines/addons-061866/id_rsa Username:docker}
	I0717 21:39:12.580926   14127 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:39:12.581423   14127 main.go:141] libmachine: Using API Version  1
	I0717 21:39:12.581442   14127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:39:12.581821   14127 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:39:12.582020   14127 main.go:141] libmachine: (addons-061866) Calling .GetState
	I0717 21:39:12.584073   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:39:12.584982   14127 main.go:141] libmachine: (addons-061866) Calling .DriverName
	I0717 21:39:12.587156   14127 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0717 21:39:12.585525   14127 main.go:141] libmachine: (addons-061866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:3c:50", ip: ""} in network mk-addons-061866: {Iface:virbr1 ExpiryTime:2023-07-17 22:38:27 +0000 UTC Type:0 Mac:52:54:00:a9:3c:50 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-061866 Clientid:01:52:54:00:a9:3c:50}
	I0717 21:39:12.585679   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHPort
	I0717 21:39:12.588668   14127 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0717 21:39:12.588680   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0717 21:39:12.588700   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHHostname
	I0717 21:39:12.588742   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined IP address 192.168.39.55 and MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:39:12.590515   14127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41457
	I0717 21:39:12.591367   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHKeyPath
	I0717 21:39:12.591580   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHUsername
	I0717 21:39:12.591749   14127 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-6542/.minikube/machines/addons-061866/id_rsa Username:docker}
	I0717 21:39:12.593003   14127 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:39:12.594139   14127 main.go:141] libmachine: Using API Version  1
	I0717 21:39:12.594159   14127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:39:12.594693   14127 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:39:12.595379   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:39:12.595949   14127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:39:12.595985   14127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:39:12.596384   14127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40835
	I0717 21:39:12.596552   14127 main.go:141] libmachine: (addons-061866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:3c:50", ip: ""} in network mk-addons-061866: {Iface:virbr1 ExpiryTime:2023-07-17 22:38:27 +0000 UTC Type:0 Mac:52:54:00:a9:3c:50 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-061866 Clientid:01:52:54:00:a9:3c:50}
	I0717 21:39:12.596582   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined IP address 192.168.39.55 and MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:39:12.596763   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHPort
	I0717 21:39:12.596912   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHKeyPath
	I0717 21:39:12.597019   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHUsername
	I0717 21:39:12.597095   14127 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-6542/.minikube/machines/addons-061866/id_rsa Username:docker}
	I0717 21:39:12.597721   14127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38033
	I0717 21:39:12.598086   14127 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:39:12.598562   14127 main.go:141] libmachine: Using API Version  1
	I0717 21:39:12.598586   14127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:39:12.598816   14127 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:39:12.598910   14127 main.go:141] libmachine: (addons-061866) Calling .GetState
	I0717 21:39:12.599427   14127 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:39:12.599729   14127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34695
	I0717 21:39:12.600134   14127 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:39:12.600243   14127 main.go:141] libmachine: Using API Version  1
	I0717 21:39:12.600262   14127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:39:12.600570   14127 main.go:141] libmachine: (addons-061866) Calling .DriverName
	I0717 21:39:12.600677   14127 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:39:12.602326   14127 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0717 21:39:12.600956   14127 main.go:141] libmachine: Using API Version  1
	I0717 21:39:12.601128   14127 main.go:141] libmachine: (addons-061866) Calling .GetState
	I0717 21:39:12.603506   14127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45433
	I0717 21:39:12.603762   14127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:39:12.605363   14127 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0717 21:39:12.604043   14127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43253
	I0717 21:39:12.604209   14127 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:39:12.604312   14127 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:39:12.605763   14127 main.go:141] libmachine: (addons-061866) Calling .DriverName
	I0717 21:39:12.608370   14127 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0717 21:39:12.607360   14127 main.go:141] libmachine: (addons-061866) Calling .GetState
	I0717 21:39:12.607609   14127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40187
	I0717 21:39:12.607642   14127 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:39:12.608125   14127 main.go:141] libmachine: Using API Version  1
	I0717 21:39:12.612144   14127 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0717 21:39:12.610168   14127 out.go:177]   - Using image docker.io/registry:2.8.1
	I0717 21:39:12.610300   14127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:39:12.610756   14127 main.go:141] libmachine: Using API Version  1
	I0717 21:39:12.610861   14127 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:39:12.611673   14127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34239
	I0717 21:39:12.612496   14127 main.go:141] libmachine: (addons-061866) Calling .DriverName
	I0717 21:39:12.614858   14127 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0717 21:39:12.613493   14127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:39:12.613815   14127 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:39:12.614090   14127 main.go:141] libmachine: Using API Version  1
	I0717 21:39:12.614622   14127 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:39:12.616824   14127 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:39:12.618037   14127 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0717 21:39:12.618059   14127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:39:12.618274   14127 main.go:141] libmachine: (addons-061866) Calling .DriverName
	I0717 21:39:12.618659   14127 main.go:141] libmachine: Using API Version  1
	I0717 21:39:12.619386   14127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:39:12.619225   14127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46725
	I0717 21:39:12.619362   14127 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0717 21:39:12.619650   14127 main.go:141] libmachine: (addons-061866) Calling .GetState
	I0717 21:39:12.619679   14127 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:39:12.619727   14127 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:39:12.620707   14127 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0717 21:39:12.621046   14127 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:39:12.621917   14127 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0717 21:39:12.621167   14127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46533
	I0717 21:39:12.622542   14127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:39:12.622664   14127 main.go:141] libmachine: Using API Version  1
	I0717 21:39:12.622688   14127 main.go:141] libmachine: (addons-061866) Calling .GetState
	I0717 21:39:12.623391   14127 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0717 21:39:12.623753   14127 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:39:12.623800   14127 main.go:141] libmachine: (addons-061866) Calling .DriverName
	I0717 21:39:12.624790   14127 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0717 21:39:12.624808   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0717 21:39:12.626337   14127 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
	I0717 21:39:12.624830   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHHostname
	I0717 21:39:12.624848   14127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:39:12.624881   14127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:39:12.625482   14127 main.go:141] libmachine: Using API Version  1
	I0717 21:39:12.626550   14127 main.go:141] libmachine: (addons-061866) Calling .DriverName
	I0717 21:39:12.628167   14127 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 21:39:12.629100   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0717 21:39:12.629121   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHHostname
	I0717 21:39:12.628502   14127 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:39:12.630720   14127 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0717 21:39:12.629317   14127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:39:12.629327   14127 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0717 21:39:12.629490   14127 main.go:141] libmachine: (addons-061866) Calling .GetState
	I0717 21:39:12.631371   14127 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:39:12.632359   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:39:12.632469   14127 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0717 21:39:12.632950   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:39:12.633214   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHPort
	I0717 21:39:12.633663   14127 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.18.1
	I0717 21:39:12.633793   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHPort
	I0717 21:39:12.633860   14127 main.go:141] libmachine: (addons-061866) Calling .GetState
	I0717 21:39:12.634191   14127 main.go:141] libmachine: (addons-061866) Calling .DriverName
	I0717 21:39:12.635017   14127 main.go:141] libmachine: (addons-061866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:3c:50", ip: ""} in network mk-addons-061866: {Iface:virbr1 ExpiryTime:2023-07-17 22:38:27 +0000 UTC Type:0 Mac:52:54:00:a9:3c:50 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-061866 Clientid:01:52:54:00:a9:3c:50}
	I0717 21:39:12.635038   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined IP address 192.168.39.55 and MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:39:12.636422   14127 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0717 21:39:12.636437   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0717 21:39:12.636457   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHHostname
	I0717 21:39:12.635110   14127 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0717 21:39:12.636499   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0717 21:39:12.636515   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHHostname
	I0717 21:39:12.635124   14127 main.go:141] libmachine: (addons-061866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:3c:50", ip: ""} in network mk-addons-061866: {Iface:virbr1 ExpiryTime:2023-07-17 22:38:27 +0000 UTC Type:0 Mac:52:54:00:a9:3c:50 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-061866 Clientid:01:52:54:00:a9:3c:50}
	I0717 21:39:12.636561   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined IP address 192.168.39.55 and MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:39:12.635132   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0717 21:39:12.636576   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHHostname
	I0717 21:39:12.635315   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHKeyPath
	I0717 21:39:12.635358   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHKeyPath
	I0717 21:39:12.637914   14127 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 21:39:12.637082   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHUsername
	I0717 21:39:12.637308   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHUsername
	I0717 21:39:12.638296   14127 main.go:141] libmachine: (addons-061866) Calling .DriverName
	I0717 21:39:12.639173   14127 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 21:39:12.639316   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 21:39:12.639332   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHHostname
	I0717 21:39:12.639652   14127 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-6542/.minikube/machines/addons-061866/id_rsa Username:docker}
	I0717 21:39:12.639855   14127 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-6542/.minikube/machines/addons-061866/id_rsa Username:docker}
	I0717 21:39:12.642107   14127 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.3
	I0717 21:39:12.640571   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:39:12.641278   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHPort
	I0717 21:39:12.641930   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:39:12.642056   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:39:12.642305   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHPort
	I0717 21:39:12.643229   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHPort
	I0717 21:39:12.643719   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHKeyPath
	I0717 21:39:12.643757   14127 main.go:141] libmachine: (addons-061866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:3c:50", ip: ""} in network mk-addons-061866: {Iface:virbr1 ExpiryTime:2023-07-17 22:38:27 +0000 UTC Type:0 Mac:52:54:00:a9:3c:50 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-061866 Clientid:01:52:54:00:a9:3c:50}
	I0717 21:39:12.643777   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined IP address 192.168.39.55 and MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:39:12.643790   14127 main.go:141] libmachine: (addons-061866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:3c:50", ip: ""} in network mk-addons-061866: {Iface:virbr1 ExpiryTime:2023-07-17 22:38:27 +0000 UTC Type:0 Mac:52:54:00:a9:3c:50 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-061866 Clientid:01:52:54:00:a9:3c:50}
	I0717 21:39:12.643800   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined IP address 192.168.39.55 and MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:39:12.643832   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHUsername
	I0717 21:39:12.643866   14127 main.go:141] libmachine: (addons-061866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:3c:50", ip: ""} in network mk-addons-061866: {Iface:virbr1 ExpiryTime:2023-07-17 22:38:27 +0000 UTC Type:0 Mac:52:54:00:a9:3c:50 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-061866 Clientid:01:52:54:00:a9:3c:50}
	I0717 21:39:12.643884   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined IP address 192.168.39.55 and MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:39:12.643900   14127 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-6542/.minikube/machines/addons-061866/id_rsa Username:docker}
	I0717 21:39:12.644299   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:39:12.644318   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHKeyPath
	I0717 21:39:12.644508   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHKeyPath
	I0717 21:39:12.644555   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHUsername
	I0717 21:39:12.644628   14127 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 21:39:12.644641   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 21:39:12.644651   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHHostname
	I0717 21:39:12.644691   14127 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-6542/.minikube/machines/addons-061866/id_rsa Username:docker}
	I0717 21:39:12.644702   14127 main.go:141] libmachine: (addons-061866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:3c:50", ip: ""} in network mk-addons-061866: {Iface:virbr1 ExpiryTime:2023-07-17 22:38:27 +0000 UTC Type:0 Mac:52:54:00:a9:3c:50 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-061866 Clientid:01:52:54:00:a9:3c:50}
	I0717 21:39:12.644724   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined IP address 192.168.39.55 and MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:39:12.644851   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHUsername
	I0717 21:39:12.644921   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHPort
	I0717 21:39:12.645008   14127 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-6542/.minikube/machines/addons-061866/id_rsa Username:docker}
	I0717 21:39:12.645092   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHKeyPath
	I0717 21:39:12.645217   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHUsername
	I0717 21:39:12.645414   14127 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-6542/.minikube/machines/addons-061866/id_rsa Username:docker}
	I0717 21:39:12.647730   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:39:12.648101   14127 main.go:141] libmachine: (addons-061866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:3c:50", ip: ""} in network mk-addons-061866: {Iface:virbr1 ExpiryTime:2023-07-17 22:38:27 +0000 UTC Type:0 Mac:52:54:00:a9:3c:50 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-061866 Clientid:01:52:54:00:a9:3c:50}
	I0717 21:39:12.648121   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined IP address 192.168.39.55 and MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:39:12.648338   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHPort
	I0717 21:39:12.648494   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHKeyPath
	I0717 21:39:12.648648   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHUsername
	I0717 21:39:12.648835   14127 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-6542/.minikube/machines/addons-061866/id_rsa Username:docker}
	I0717 21:39:12.649786   14127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43481
	I0717 21:39:12.650160   14127 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:39:12.650621   14127 main.go:141] libmachine: Using API Version  1
	I0717 21:39:12.650635   14127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:39:12.650944   14127 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:39:12.651150   14127 main.go:141] libmachine: (addons-061866) Calling .GetState
	I0717 21:39:12.653124   14127 main.go:141] libmachine: (addons-061866) Calling .DriverName
	I0717 21:39:12.653340   14127 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 21:39:12.653355   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 21:39:12.653365   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHHostname
	I0717 21:39:12.656198   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:39:12.656557   14127 main.go:141] libmachine: (addons-061866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:3c:50", ip: ""} in network mk-addons-061866: {Iface:virbr1 ExpiryTime:2023-07-17 22:38:27 +0000 UTC Type:0 Mac:52:54:00:a9:3c:50 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-061866 Clientid:01:52:54:00:a9:3c:50}
	I0717 21:39:12.656583   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined IP address 192.168.39.55 and MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:39:12.656834   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHPort
	I0717 21:39:12.657008   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHKeyPath
	I0717 21:39:12.657159   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHUsername
	I0717 21:39:12.657310   14127 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-6542/.minikube/machines/addons-061866/id_rsa Username:docker}
	W0717 21:39:12.658527   14127 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:42146->192.168.39.55:22: read: connection reset by peer
	I0717 21:39:12.658547   14127 retry.go:31] will retry after 297.51298ms: ssh: handshake failed: read tcp 192.168.39.1:42146->192.168.39.55:22: read: connection reset by peer
	I0717 21:39:13.049689   14127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0717 21:39:13.103601   14127 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-061866" context rescaled to 1 replicas
	I0717 21:39:13.103653   14127 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.55 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0717 21:39:13.105865   14127 out.go:177] * Verifying Kubernetes components...
	I0717 21:39:13.107321   14127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 21:39:13.105166   14127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 21:39:13.115500   14127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 21:39:13.351091   14127 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0717 21:39:13.351117   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0717 21:39:13.394660   14127 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0717 21:39:13.394689   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0717 21:39:13.398143   14127 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 21:39:13.398159   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0717 21:39:13.403142   14127 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0717 21:39:13.403157   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0717 21:39:13.410380   14127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 21:39:13.460921   14127 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0717 21:39:13.460943   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0717 21:39:13.500429   14127 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0717 21:39:13.500453   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0717 21:39:13.592045   14127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 21:39:13.604029   14127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 21:39:13.712327   14127 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0717 21:39:13.712349   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0717 21:39:13.798824   14127 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0717 21:39:13.798847   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0717 21:39:13.828410   14127 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0717 21:39:13.828439   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0717 21:39:13.882200   14127 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0717 21:39:13.882226   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0717 21:39:13.885788   14127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0717 21:39:14.147996   14127 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 21:39:14.148026   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 21:39:14.172400   14127 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0717 21:39:14.172515   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0717 21:39:14.179473   14127 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0717 21:39:14.179498   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0717 21:39:14.273785   14127 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0717 21:39:14.273815   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0717 21:39:14.331922   14127 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0717 21:39:14.331949   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0717 21:39:14.380054   14127 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0717 21:39:14.380081   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0717 21:39:14.420371   14127 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 21:39:14.420394   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 21:39:14.434584   14127 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0717 21:39:14.434611   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0717 21:39:14.446380   14127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0717 21:39:14.514525   14127 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0717 21:39:14.514556   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0717 21:39:14.531419   14127 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0717 21:39:14.531439   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0717 21:39:14.553339   14127 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0717 21:39:14.553360   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0717 21:39:14.577179   14127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 21:39:14.595088   14127 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0717 21:39:14.595109   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0717 21:39:14.791906   14127 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0717 21:39:14.791929   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0717 21:39:14.816030   14127 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 21:39:14.816052   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0717 21:39:14.938943   14127 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0717 21:39:14.938972   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0717 21:39:15.023994   14127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 21:39:15.124379   14127 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0717 21:39:15.124398   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0717 21:39:15.348955   14127 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0717 21:39:15.348975   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0717 21:39:15.502319   14127 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 21:39:15.502338   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0717 21:39:15.773006   14127 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 21:39:15.773031   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0717 21:39:15.773084   14127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 21:39:16.033830   14127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 21:39:17.622078   14127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.572350511s)
	I0717 21:39:17.622147   14127 main.go:141] libmachine: Making call to close driver server
	I0717 21:39:17.622163   14127 main.go:141] libmachine: (addons-061866) Calling .Close
	I0717 21:39:17.622099   14127 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (4.514747454s)
	I0717 21:39:17.622396   14127 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:39:17.622448   14127 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:39:17.622465   14127 main.go:141] libmachine: Making call to close driver server
	I0717 21:39:17.622475   14127 main.go:141] libmachine: (addons-061866) Calling .Close
	I0717 21:39:17.622721   14127 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:39:17.622742   14127 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:39:17.623188   14127 node_ready.go:35] waiting up to 6m0s for node "addons-061866" to be "Ready" ...
	I0717 21:39:17.631745   14127 node_ready.go:49] node "addons-061866" has status "Ready":"True"
	I0717 21:39:17.631771   14127 node_ready.go:38] duration metric: took 8.560911ms waiting for node "addons-061866" to be "Ready" ...
	I0717 21:39:17.631781   14127 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 21:39:17.640315   14127 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-cwlf8" in "kube-system" namespace to be "Ready" ...
	I0717 21:39:17.682819   14127 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.575441657s)
	I0717 21:39:17.682843   14127 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0717 21:39:18.401339   14127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.285805665s)
	I0717 21:39:18.401392   14127 main.go:141] libmachine: Making call to close driver server
	I0717 21:39:18.401407   14127 main.go:141] libmachine: (addons-061866) Calling .Close
	I0717 21:39:18.401421   14127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.991009857s)
	I0717 21:39:18.401468   14127 main.go:141] libmachine: Making call to close driver server
	I0717 21:39:18.401486   14127 main.go:141] libmachine: (addons-061866) Calling .Close
	I0717 21:39:18.401487   14127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.809410092s)
	I0717 21:39:18.401508   14127 main.go:141] libmachine: Making call to close driver server
	I0717 21:39:18.401524   14127 main.go:141] libmachine: (addons-061866) Calling .Close
	I0717 21:39:18.401694   14127 main.go:141] libmachine: (addons-061866) DBG | Closing plugin on server side
	I0717 21:39:18.401728   14127 main.go:141] libmachine: (addons-061866) DBG | Closing plugin on server side
	I0717 21:39:18.401764   14127 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:39:18.401774   14127 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:39:18.401786   14127 main.go:141] libmachine: Making call to close driver server
	I0717 21:39:18.401794   14127 main.go:141] libmachine: (addons-061866) Calling .Close
	I0717 21:39:18.401800   14127 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:39:18.401820   14127 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:39:18.401834   14127 main.go:141] libmachine: Making call to close driver server
	I0717 21:39:18.401847   14127 main.go:141] libmachine: (addons-061866) Calling .Close
	I0717 21:39:18.401872   14127 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:39:18.401890   14127 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:39:18.401900   14127 main.go:141] libmachine: Making call to close driver server
	I0717 21:39:18.401909   14127 main.go:141] libmachine: (addons-061866) Calling .Close
	I0717 21:39:18.401983   14127 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:39:18.401995   14127 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:39:18.403415   14127 main.go:141] libmachine: (addons-061866) DBG | Closing plugin on server side
	I0717 21:39:18.403439   14127 main.go:141] libmachine: (addons-061866) DBG | Closing plugin on server side
	I0717 21:39:18.403464   14127 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:39:18.403491   14127 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:39:18.403639   14127 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:39:18.403662   14127 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:39:18.403674   14127 main.go:141] libmachine: Making call to close driver server
	I0717 21:39:18.403690   14127 main.go:141] libmachine: (addons-061866) Calling .Close
	I0717 21:39:18.403946   14127 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:39:18.403963   14127 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:39:18.403946   14127 main.go:141] libmachine: (addons-061866) DBG | Closing plugin on server side
	I0717 21:39:19.233872   14127 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0717 21:39:19.233906   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHHostname
	I0717 21:39:19.236690   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:39:19.237025   14127 main.go:141] libmachine: (addons-061866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:3c:50", ip: ""} in network mk-addons-061866: {Iface:virbr1 ExpiryTime:2023-07-17 22:38:27 +0000 UTC Type:0 Mac:52:54:00:a9:3c:50 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-061866 Clientid:01:52:54:00:a9:3c:50}
	I0717 21:39:19.237052   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined IP address 192.168.39.55 and MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:39:19.237222   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHPort
	I0717 21:39:19.237423   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHKeyPath
	I0717 21:39:19.237605   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHUsername
	I0717 21:39:19.237741   14127 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-6542/.minikube/machines/addons-061866/id_rsa Username:docker}
	I0717 21:39:19.660435   14127 pod_ready.go:102] pod "coredns-5d78c9869d-cwlf8" in "kube-system" namespace has status "Ready":"False"
	I0717 21:39:20.327222   14127 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0717 21:39:20.517913   14127 addons.go:231] Setting addon gcp-auth=true in "addons-061866"
	I0717 21:39:20.517976   14127 host.go:66] Checking if "addons-061866" exists ...
	I0717 21:39:20.518449   14127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:39:20.518486   14127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:39:20.532944   14127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45323
	I0717 21:39:20.533413   14127 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:39:20.534076   14127 main.go:141] libmachine: Using API Version  1
	I0717 21:39:20.534097   14127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:39:20.534466   14127 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:39:20.534908   14127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:39:20.534945   14127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:39:20.549628   14127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42355
	I0717 21:39:20.550070   14127 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:39:20.550612   14127 main.go:141] libmachine: Using API Version  1
	I0717 21:39:20.550636   14127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:39:20.550947   14127 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:39:20.551161   14127 main.go:141] libmachine: (addons-061866) Calling .GetState
	I0717 21:39:20.552833   14127 main.go:141] libmachine: (addons-061866) Calling .DriverName
	I0717 21:39:20.553110   14127 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0717 21:39:20.553138   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHHostname
	I0717 21:39:20.555612   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:39:20.555974   14127 main.go:141] libmachine: (addons-061866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:3c:50", ip: ""} in network mk-addons-061866: {Iface:virbr1 ExpiryTime:2023-07-17 22:38:27 +0000 UTC Type:0 Mac:52:54:00:a9:3c:50 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-061866 Clientid:01:52:54:00:a9:3c:50}
	I0717 21:39:20.556004   14127 main.go:141] libmachine: (addons-061866) DBG | domain addons-061866 has defined IP address 192.168.39.55 and MAC address 52:54:00:a9:3c:50 in network mk-addons-061866
	I0717 21:39:20.556160   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHPort
	I0717 21:39:20.556338   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHKeyPath
	I0717 21:39:20.556489   14127 main.go:141] libmachine: (addons-061866) Calling .GetSSHUsername
	I0717 21:39:20.556622   14127 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-6542/.minikube/machines/addons-061866/id_rsa Username:docker}
	I0717 21:39:21.660553   14127 pod_ready.go:102] pod "coredns-5d78c9869d-cwlf8" in "kube-system" namespace has status "Ready":"False"
	I0717 21:39:23.181417   14127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.577350292s)
	I0717 21:39:23.181460   14127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.295630863s)
	I0717 21:39:23.181495   14127 main.go:141] libmachine: Making call to close driver server
	I0717 21:39:23.181494   14127 main.go:141] libmachine: Making call to close driver server
	I0717 21:39:23.181501   14127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.735091523s)
	I0717 21:39:23.181507   14127 main.go:141] libmachine: (addons-061866) Calling .Close
	I0717 21:39:23.181511   14127 main.go:141] libmachine: (addons-061866) Calling .Close
	I0717 21:39:23.181522   14127 main.go:141] libmachine: Making call to close driver server
	I0717 21:39:23.181532   14127 main.go:141] libmachine: (addons-061866) Calling .Close
	I0717 21:39:23.181650   14127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.604441606s)
	I0717 21:39:23.181675   14127 main.go:141] libmachine: Making call to close driver server
	I0717 21:39:23.181686   14127 main.go:141] libmachine: (addons-061866) Calling .Close
	I0717 21:39:23.181742   14127 main.go:141] libmachine: (addons-061866) DBG | Closing plugin on server side
	I0717 21:39:23.181767   14127 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:39:23.181775   14127 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:39:23.181772   14127 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:39:23.181784   14127 main.go:141] libmachine: Making call to close driver server
	I0717 21:39:23.181786   14127 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:39:23.181785   14127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.157756069s)
	I0717 21:39:23.181793   14127 main.go:141] libmachine: (addons-061866) Calling .Close
	I0717 21:39:23.181799   14127 main.go:141] libmachine: Making call to close driver server
	I0717 21:39:23.181808   14127 main.go:141] libmachine: (addons-061866) Calling .Close
	W0717 21:39:23.181811   14127 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 21:39:23.181828   14127 retry.go:31] will retry after 346.705903ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 21:39:23.181918   14127 main.go:141] libmachine: (addons-061866) DBG | Closing plugin on server side
	I0717 21:39:23.181926   14127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.408813066s)
	I0717 21:39:23.181946   14127 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:39:23.181955   14127 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:39:23.181965   14127 main.go:141] libmachine: Making call to close driver server
	I0717 21:39:23.181974   14127 main.go:141] libmachine: (addons-061866) Calling .Close
	I0717 21:39:23.181974   14127 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:39:23.181985   14127 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:39:23.181994   14127 main.go:141] libmachine: Making call to close driver server
	I0717 21:39:23.182002   14127 main.go:141] libmachine: (addons-061866) Calling .Close
	I0717 21:39:23.182014   14127 main.go:141] libmachine: Making call to close driver server
	I0717 21:39:23.182024   14127 main.go:141] libmachine: (addons-061866) Calling .Close
	I0717 21:39:23.182086   14127 main.go:141] libmachine: (addons-061866) DBG | Closing plugin on server side
	I0717 21:39:23.182105   14127 main.go:141] libmachine: (addons-061866) DBG | Closing plugin on server side
	I0717 21:39:23.182117   14127 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:39:23.182127   14127 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:39:23.182130   14127 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:39:23.182138   14127 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:39:23.182139   14127 addons.go:467] Verifying addon registry=true in "addons-061866"
	I0717 21:39:23.185437   14127 out.go:177] * Verifying registry addon...
	I0717 21:39:23.183900   14127 main.go:141] libmachine: (addons-061866) DBG | Closing plugin on server side
	I0717 21:39:23.183933   14127 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:39:23.183943   14127 main.go:141] libmachine: (addons-061866) DBG | Closing plugin on server side
	I0717 21:39:23.183954   14127 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:39:23.183965   14127 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:39:23.183984   14127 main.go:141] libmachine: (addons-061866) DBG | Closing plugin on server side
	I0717 21:39:23.187001   14127 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:39:23.187043   14127 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:39:23.187070   14127 addons.go:467] Verifying addon ingress=true in "addons-061866"
	I0717 21:39:23.187098   14127 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:39:23.187114   14127 main.go:141] libmachine: Making call to close driver server
	I0717 21:39:23.187127   14127 main.go:141] libmachine: (addons-061866) Calling .Close
	I0717 21:39:23.188595   14127 out.go:177] * Verifying ingress addon...
	I0717 21:39:23.187046   14127 addons.go:467] Verifying addon metrics-server=true in "addons-061866"
	I0717 21:39:23.187423   14127 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:39:23.187427   14127 main.go:141] libmachine: (addons-061866) DBG | Closing plugin on server side
	I0717 21:39:23.187829   14127 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0717 21:39:23.190223   14127 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:39:23.191109   14127 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0717 21:39:23.198766   14127 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0717 21:39:23.198785   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:23.202059   14127 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0717 21:39:23.202077   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:23.529590   14127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 21:39:23.715543   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:23.715933   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:24.314460   14127 pod_ready.go:102] pod "coredns-5d78c9869d-cwlf8" in "kube-system" namespace has status "Ready":"False"
	I0717 21:39:24.314948   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:24.315191   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:24.646781   14127 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.093646887s)
	I0717 21:39:24.648606   14127 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0717 21:39:24.648504   14127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.61461183s)
	I0717 21:39:24.650217   14127 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0717 21:39:24.652008   14127 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0717 21:39:24.650266   14127 main.go:141] libmachine: Making call to close driver server
	I0717 21:39:24.652046   14127 main.go:141] libmachine: (addons-061866) Calling .Close
	I0717 21:39:24.652079   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0717 21:39:24.652505   14127 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:39:24.652526   14127 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:39:24.652562   14127 main.go:141] libmachine: Making call to close driver server
	I0717 21:39:24.652575   14127 main.go:141] libmachine: (addons-061866) Calling .Close
	I0717 21:39:24.652858   14127 main.go:141] libmachine: (addons-061866) DBG | Closing plugin on server side
	I0717 21:39:24.652879   14127 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:39:24.652894   14127 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:39:24.652913   14127 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-061866"
	I0717 21:39:24.654676   14127 out.go:177] * Verifying csi-hostpath-driver addon...
	I0717 21:39:24.656896   14127 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0717 21:39:24.726493   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:24.726651   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:24.727295   14127 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0717 21:39:24.727317   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:24.788821   14127 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0717 21:39:24.788850   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0717 21:39:24.893378   14127 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 21:39:24.893402   14127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0717 21:39:25.033036   14127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 21:39:25.205660   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:25.208948   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:25.239552   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:25.705080   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:25.713219   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:25.733741   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:26.204809   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:26.206354   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:26.233165   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:26.653356   14127 pod_ready.go:102] pod "coredns-5d78c9869d-cwlf8" in "kube-system" namespace has status "Ready":"False"
	I0717 21:39:26.719632   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:26.719715   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:26.736481   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:27.216753   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:27.217082   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:27.233500   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:27.610867   14127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.081223213s)
	I0717 21:39:27.610917   14127 main.go:141] libmachine: Making call to close driver server
	I0717 21:39:27.610931   14127 main.go:141] libmachine: (addons-061866) Calling .Close
	I0717 21:39:27.611184   14127 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:39:27.611201   14127 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:39:27.611203   14127 main.go:141] libmachine: (addons-061866) DBG | Closing plugin on server side
	I0717 21:39:27.611210   14127 main.go:141] libmachine: Making call to close driver server
	I0717 21:39:27.611220   14127 main.go:141] libmachine: (addons-061866) Calling .Close
	I0717 21:39:27.611409   14127 main.go:141] libmachine: (addons-061866) DBG | Closing plugin on server side
	I0717 21:39:27.611453   14127 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:39:27.611469   14127 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:39:27.704409   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:27.709298   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:27.737822   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:28.030243   14127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.997162461s)
	I0717 21:39:28.030299   14127 main.go:141] libmachine: Making call to close driver server
	I0717 21:39:28.030314   14127 main.go:141] libmachine: (addons-061866) Calling .Close
	I0717 21:39:28.030776   14127 main.go:141] libmachine: (addons-061866) DBG | Closing plugin on server side
	I0717 21:39:28.030778   14127 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:39:28.030837   14127 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:39:28.030856   14127 main.go:141] libmachine: Making call to close driver server
	I0717 21:39:28.030870   14127 main.go:141] libmachine: (addons-061866) Calling .Close
	I0717 21:39:28.031121   14127 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:39:28.031138   14127 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:39:28.031144   14127 main.go:141] libmachine: (addons-061866) DBG | Closing plugin on server side
	I0717 21:39:28.032506   14127 addons.go:467] Verifying addon gcp-auth=true in "addons-061866"
	I0717 21:39:28.034188   14127 out.go:177] * Verifying gcp-auth addon...
	I0717 21:39:28.036529   14127 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0717 21:39:28.040320   14127 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0717 21:39:28.040337   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:28.207669   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:28.216278   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:28.234751   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:28.545122   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:28.658080   14127 pod_ready.go:102] pod "coredns-5d78c9869d-cwlf8" in "kube-system" namespace has status "Ready":"False"
	I0717 21:39:28.703970   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:28.707165   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:28.734521   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:29.044350   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:29.203860   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:29.206752   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:29.236356   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:29.544147   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:29.703357   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:29.706333   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:29.732973   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:30.045326   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:30.204243   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:30.206743   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:30.248669   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:30.545180   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:30.662079   14127 pod_ready.go:102] pod "coredns-5d78c9869d-cwlf8" in "kube-system" namespace has status "Ready":"False"
	I0717 21:39:30.703705   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:30.706369   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:30.733214   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:31.043887   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:31.204526   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:31.208208   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:31.232854   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:31.544298   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:31.705382   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:31.708366   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:31.732945   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:32.043710   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:32.204427   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:32.207285   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:32.233007   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:32.544264   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:32.859644   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:32.859671   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:32.860828   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:32.862350   14127 pod_ready.go:102] pod "coredns-5d78c9869d-cwlf8" in "kube-system" namespace has status "Ready":"False"
	I0717 21:39:33.045218   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:33.207939   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:33.208175   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:33.233371   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:33.544115   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:33.703902   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:33.707451   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:33.732942   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:34.044300   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:34.207380   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:34.209663   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:34.233795   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:34.547090   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:34.704896   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:34.707992   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:34.737661   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:35.044833   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:35.154548   14127 pod_ready.go:102] pod "coredns-5d78c9869d-cwlf8" in "kube-system" namespace has status "Ready":"False"
	I0717 21:39:35.204434   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:35.208794   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:35.235856   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:35.544562   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:35.704992   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:35.708079   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:35.732717   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:36.044415   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:36.204381   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:36.208043   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:36.232912   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:36.544918   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:36.704853   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:36.707266   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:36.736570   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:37.044959   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:37.160398   14127 pod_ready.go:102] pod "coredns-5d78c9869d-cwlf8" in "kube-system" namespace has status "Ready":"False"
	I0717 21:39:37.204109   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:37.207343   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:37.234069   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:37.544338   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:37.705336   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:37.707819   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:37.734939   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:38.046229   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:38.204297   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:38.207264   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:38.233747   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:38.546306   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:38.706033   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:38.708729   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:38.734833   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:39.044024   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:39.204190   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:39.206707   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:39.233737   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:39.545456   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:39.653576   14127 pod_ready.go:102] pod "coredns-5d78c9869d-cwlf8" in "kube-system" namespace has status "Ready":"False"
	I0717 21:39:39.704495   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:39.708058   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:39.732988   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:40.044625   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:40.203534   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:40.207069   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:40.235646   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:40.544513   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:40.706200   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:40.707383   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:40.734492   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:41.044581   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:41.205019   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:41.208564   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:41.234750   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:41.696648   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:41.702944   14127 pod_ready.go:102] pod "coredns-5d78c9869d-cwlf8" in "kube-system" namespace has status "Ready":"False"
	I0717 21:39:41.706708   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:41.714586   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:41.734160   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:42.044402   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:42.205011   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:42.208418   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:42.234058   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:42.544356   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:42.704049   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:42.706648   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:42.733417   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:43.045453   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:43.208069   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:43.211827   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:43.239180   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:43.545776   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:43.704198   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:43.707151   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:43.733910   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:44.044445   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:44.165954   14127 pod_ready.go:102] pod "coredns-5d78c9869d-cwlf8" in "kube-system" namespace has status "Ready":"False"
	I0717 21:39:44.204878   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:44.208824   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:44.233037   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:44.544581   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:44.705022   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:44.708859   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:44.735384   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:45.044658   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:45.205946   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:45.210781   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:45.234390   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:45.545817   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:45.706703   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:45.718938   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:45.741780   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:46.044707   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:46.204801   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:46.207937   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:46.233058   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:46.544106   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:46.653524   14127 pod_ready.go:102] pod "coredns-5d78c9869d-cwlf8" in "kube-system" namespace has status "Ready":"False"
	I0717 21:39:46.704841   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:46.707722   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:46.733574   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:47.044852   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:47.203775   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:47.206770   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:47.233295   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:47.544292   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:47.707896   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:47.710139   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:47.733556   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:48.122326   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:48.206589   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:48.209616   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:48.235087   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:48.544373   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:48.655413   14127 pod_ready.go:102] pod "coredns-5d78c9869d-cwlf8" in "kube-system" namespace has status "Ready":"False"
	I0717 21:39:48.704010   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:48.706158   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:48.734572   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:49.045191   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:49.204762   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:49.207180   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:49.233190   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:49.559555   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:49.706252   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:49.710133   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:49.737422   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:50.044020   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:50.206171   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:50.208607   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:50.233114   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:50.545176   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:50.704630   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:50.707630   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:50.734396   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:51.044137   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:51.155103   14127 pod_ready.go:102] pod "coredns-5d78c9869d-cwlf8" in "kube-system" namespace has status "Ready":"False"
	I0717 21:39:51.205461   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:51.207925   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:51.233406   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:51.545199   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:51.909578   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:51.909683   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:51.912128   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:52.044286   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:52.204304   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:52.208118   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:52.237606   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:52.545477   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:52.654925   14127 pod_ready.go:92] pod "coredns-5d78c9869d-cwlf8" in "kube-system" namespace has status "Ready":"True"
	I0717 21:39:52.654944   14127 pod_ready.go:81] duration metric: took 35.014603689s waiting for pod "coredns-5d78c9869d-cwlf8" in "kube-system" namespace to be "Ready" ...
	I0717 21:39:52.654954   14127 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-p84v9" in "kube-system" namespace to be "Ready" ...
	I0717 21:39:52.657312   14127 pod_ready.go:97] error getting pod "coredns-5d78c9869d-p84v9" in "kube-system" namespace (skipping!): pods "coredns-5d78c9869d-p84v9" not found
	I0717 21:39:52.657339   14127 pod_ready.go:81] duration metric: took 2.377954ms waiting for pod "coredns-5d78c9869d-p84v9" in "kube-system" namespace to be "Ready" ...
	E0717 21:39:52.657350   14127 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5d78c9869d-p84v9" in "kube-system" namespace (skipping!): pods "coredns-5d78c9869d-p84v9" not found
	I0717 21:39:52.657359   14127 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-061866" in "kube-system" namespace to be "Ready" ...
	I0717 21:39:52.663045   14127 pod_ready.go:92] pod "etcd-addons-061866" in "kube-system" namespace has status "Ready":"True"
	I0717 21:39:52.663067   14127 pod_ready.go:81] duration metric: took 5.700033ms waiting for pod "etcd-addons-061866" in "kube-system" namespace to be "Ready" ...
	I0717 21:39:52.663080   14127 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-061866" in "kube-system" namespace to be "Ready" ...
	I0717 21:39:52.668638   14127 pod_ready.go:92] pod "kube-apiserver-addons-061866" in "kube-system" namespace has status "Ready":"True"
	I0717 21:39:52.668654   14127 pod_ready.go:81] duration metric: took 5.567241ms waiting for pod "kube-apiserver-addons-061866" in "kube-system" namespace to be "Ready" ...
	I0717 21:39:52.668662   14127 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-061866" in "kube-system" namespace to be "Ready" ...
	I0717 21:39:52.684720   14127 pod_ready.go:92] pod "kube-controller-manager-addons-061866" in "kube-system" namespace has status "Ready":"True"
	I0717 21:39:52.684745   14127 pod_ready.go:81] duration metric: took 16.072593ms waiting for pod "kube-controller-manager-addons-061866" in "kube-system" namespace to be "Ready" ...
	I0717 21:39:52.684756   14127 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rwnfr" in "kube-system" namespace to be "Ready" ...
	I0717 21:39:52.705253   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:52.709873   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:52.734101   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:52.852208   14127 pod_ready.go:92] pod "kube-proxy-rwnfr" in "kube-system" namespace has status "Ready":"True"
	I0717 21:39:52.852236   14127 pod_ready.go:81] duration metric: took 167.473712ms waiting for pod "kube-proxy-rwnfr" in "kube-system" namespace to be "Ready" ...
	I0717 21:39:52.852245   14127 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-061866" in "kube-system" namespace to be "Ready" ...
	I0717 21:39:53.045166   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:53.204060   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:53.206914   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:53.234802   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:53.251629   14127 pod_ready.go:92] pod "kube-scheduler-addons-061866" in "kube-system" namespace has status "Ready":"True"
	I0717 21:39:53.251687   14127 pod_ready.go:81] duration metric: took 399.409398ms waiting for pod "kube-scheduler-addons-061866" in "kube-system" namespace to be "Ready" ...
	I0717 21:39:53.251701   14127 pod_ready.go:38] duration metric: took 35.619906181s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 21:39:53.251722   14127 api_server.go:52] waiting for apiserver process to appear ...
	I0717 21:39:53.251788   14127 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 21:39:53.277190   14127 api_server.go:72] duration metric: took 40.173498911s to wait for apiserver process to appear ...
	I0717 21:39:53.277214   14127 api_server.go:88] waiting for apiserver healthz status ...
	I0717 21:39:53.277242   14127 api_server.go:253] Checking apiserver healthz at https://192.168.39.55:8443/healthz ...
	I0717 21:39:53.285167   14127 api_server.go:279] https://192.168.39.55:8443/healthz returned 200:
	ok
	I0717 21:39:53.286234   14127 api_server.go:141] control plane version: v1.27.3
	I0717 21:39:53.286252   14127 api_server.go:131] duration metric: took 9.031587ms to wait for apiserver health ...
	I0717 21:39:53.286259   14127 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 21:39:53.456430   14127 system_pods.go:59] 17 kube-system pods found
	I0717 21:39:53.456466   14127 system_pods.go:61] "coredns-5d78c9869d-cwlf8" [15c9136c-f7f4-492a-8b39-8b0ac02a97c2] Running
	I0717 21:39:53.456478   14127 system_pods.go:61] "csi-hostpath-attacher-0" [bf4f5331-0d70-4f2e-a4e4-8830b255841b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0717 21:39:53.456487   14127 system_pods.go:61] "csi-hostpath-resizer-0" [e640e275-429e-4703-a494-bc1fd903c6ee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0717 21:39:53.456495   14127 system_pods.go:61] "csi-hostpathplugin-4cvtv" [a73439b5-fb6c-436e-8176-8d2335b1add1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0717 21:39:53.456502   14127 system_pods.go:61] "etcd-addons-061866" [e103db6e-14e2-4b4e-b5ef-a8249f1da40b] Running
	I0717 21:39:53.456507   14127 system_pods.go:61] "kube-apiserver-addons-061866" [f01131ad-4958-40b0-a1b7-6afa6d5c7cea] Running
	I0717 21:39:53.456511   14127 system_pods.go:61] "kube-controller-manager-addons-061866" [8f79f96c-332a-4517-a78e-79df150310b4] Running
	I0717 21:39:53.456518   14127 system_pods.go:61] "kube-ingress-dns-minikube" [f532598d-f7ef-4462-af79-ac6ce14ff21f] Running
	I0717 21:39:53.456522   14127 system_pods.go:61] "kube-proxy-rwnfr" [f6c17fa8-2d58-4e5c-97b8-96d4c25969ac] Running
	I0717 21:39:53.456526   14127 system_pods.go:61] "kube-scheduler-addons-061866" [7e066cee-abe1-4d2e-8ddf-d801acebe0be] Running
	I0717 21:39:53.456530   14127 system_pods.go:61] "metrics-server-844d8db974-22fzn" [ea3c4280-fe08-4041-826d-ed2440bd17d9] Running
	I0717 21:39:53.456536   14127 system_pods.go:61] "registry-proxy-stqnf" [a00c0e48-1f03-43df-af79-4e00a7720ba7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0717 21:39:53.456544   14127 system_pods.go:61] "registry-w6gzd" [2da542cc-0709-40bc-b84b-896cc24ab425] Running
	I0717 21:39:53.456550   14127 system_pods.go:61] "snapshot-controller-75bbb956b9-8kbj2" [827919d2-3bbf-4170-a72c-c5e48806f663] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 21:39:53.456556   14127 system_pods.go:61] "snapshot-controller-75bbb956b9-n6ghc" [680a0697-77ca-49f2-8536-4c2efb517271] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 21:39:53.456564   14127 system_pods.go:61] "storage-provisioner" [410f3de6-9b7a-41a2-8a7c-419bacb96b41] Running
	I0717 21:39:53.456569   14127 system_pods.go:61] "tiller-deploy-6847666dc-gtmd8" [a569b132-a101-4fd9-b551-518fa3e6b80e] Running
	I0717 21:39:53.456574   14127 system_pods.go:74] duration metric: took 170.310435ms to wait for pod list to return data ...
	I0717 21:39:53.456583   14127 default_sa.go:34] waiting for default service account to be created ...
	I0717 21:39:53.548552   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:53.658506   14127 default_sa.go:45] found service account: "default"
	I0717 21:39:53.658536   14127 default_sa.go:55] duration metric: took 201.947983ms for default service account to be created ...
	I0717 21:39:53.658548   14127 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 21:39:53.711336   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:53.711703   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:53.735616   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:53.858792   14127 system_pods.go:86] 17 kube-system pods found
	I0717 21:39:53.858832   14127 system_pods.go:89] "coredns-5d78c9869d-cwlf8" [15c9136c-f7f4-492a-8b39-8b0ac02a97c2] Running
	I0717 21:39:53.858846   14127 system_pods.go:89] "csi-hostpath-attacher-0" [bf4f5331-0d70-4f2e-a4e4-8830b255841b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0717 21:39:53.858861   14127 system_pods.go:89] "csi-hostpath-resizer-0" [e640e275-429e-4703-a494-bc1fd903c6ee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0717 21:39:53.858874   14127 system_pods.go:89] "csi-hostpathplugin-4cvtv" [a73439b5-fb6c-436e-8176-8d2335b1add1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0717 21:39:53.858882   14127 system_pods.go:89] "etcd-addons-061866" [e103db6e-14e2-4b4e-b5ef-a8249f1da40b] Running
	I0717 21:39:53.858890   14127 system_pods.go:89] "kube-apiserver-addons-061866" [f01131ad-4958-40b0-a1b7-6afa6d5c7cea] Running
	I0717 21:39:53.858898   14127 system_pods.go:89] "kube-controller-manager-addons-061866" [8f79f96c-332a-4517-a78e-79df150310b4] Running
	I0717 21:39:53.858908   14127 system_pods.go:89] "kube-ingress-dns-minikube" [f532598d-f7ef-4462-af79-ac6ce14ff21f] Running
	I0717 21:39:53.858915   14127 system_pods.go:89] "kube-proxy-rwnfr" [f6c17fa8-2d58-4e5c-97b8-96d4c25969ac] Running
	I0717 21:39:53.858930   14127 system_pods.go:89] "kube-scheduler-addons-061866" [7e066cee-abe1-4d2e-8ddf-d801acebe0be] Running
	I0717 21:39:53.858937   14127 system_pods.go:89] "metrics-server-844d8db974-22fzn" [ea3c4280-fe08-4041-826d-ed2440bd17d9] Running
	I0717 21:39:53.858946   14127 system_pods.go:89] "registry-proxy-stqnf" [a00c0e48-1f03-43df-af79-4e00a7720ba7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0717 21:39:53.858956   14127 system_pods.go:89] "registry-w6gzd" [2da542cc-0709-40bc-b84b-896cc24ab425] Running
	I0717 21:39:53.858966   14127 system_pods.go:89] "snapshot-controller-75bbb956b9-8kbj2" [827919d2-3bbf-4170-a72c-c5e48806f663] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 21:39:53.858979   14127 system_pods.go:89] "snapshot-controller-75bbb956b9-n6ghc" [680a0697-77ca-49f2-8536-4c2efb517271] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 21:39:53.858985   14127 system_pods.go:89] "storage-provisioner" [410f3de6-9b7a-41a2-8a7c-419bacb96b41] Running
	I0717 21:39:53.858995   14127 system_pods.go:89] "tiller-deploy-6847666dc-gtmd8" [a569b132-a101-4fd9-b551-518fa3e6b80e] Running
	I0717 21:39:53.859003   14127 system_pods.go:126] duration metric: took 200.449894ms to wait for k8s-apps to be running ...
	I0717 21:39:53.859014   14127 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 21:39:53.859070   14127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 21:39:53.873012   14127 system_svc.go:56] duration metric: took 13.988596ms WaitForService to wait for kubelet.
	I0717 21:39:53.873039   14127 kubeadm.go:581] duration metric: took 40.769355756s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 21:39:53.873066   14127 node_conditions.go:102] verifying NodePressure condition ...
	I0717 21:39:54.045259   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:54.051053   14127 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 21:39:54.051093   14127 node_conditions.go:123] node cpu capacity is 2
	I0717 21:39:54.051109   14127 node_conditions.go:105] duration metric: took 178.038055ms to run NodePressure ...
	I0717 21:39:54.051123   14127 start.go:228] waiting for startup goroutines ...
	I0717 21:39:54.203465   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:54.207191   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:54.233512   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:54.656168   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:54.709734   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:54.714166   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:54.733051   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:55.044828   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:55.204251   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:55.207515   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:55.234108   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:55.544383   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:55.704696   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:55.708446   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:55.737339   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:56.044491   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:56.204314   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:56.207935   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:56.233002   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:56.545382   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:56.704492   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:56.708370   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:56.733726   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:57.044265   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:57.206418   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:57.207947   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:57.232930   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:57.543857   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:57.705613   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:57.707374   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:57.734605   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:58.046107   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:58.204479   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:58.207675   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:58.237224   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:58.546622   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:58.707148   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:58.710026   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:58.734659   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:59.045364   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:59.204262   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:59.208857   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:59.234028   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:39:59.544600   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:39:59.705033   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:39:59.706954   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:39:59.736341   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:40:00.044454   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:00.206471   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:40:00.210678   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:00.234888   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:40:00.545896   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:00.807251   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:40:00.808727   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:00.812006   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:40:01.045741   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:01.205313   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:40:01.207483   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:01.233033   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:40:01.545550   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:01.715643   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:40:01.716850   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:01.733891   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:40:02.044145   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:02.204890   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:40:02.207368   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:02.233047   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:40:02.547684   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:02.706730   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:40:02.711133   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:02.735310   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:40:03.046701   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:03.206433   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:40:03.209954   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:03.231948   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:40:03.543972   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:03.703339   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:40:03.707114   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:03.734769   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:40:04.046588   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:04.204506   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:40:04.207922   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:04.233209   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:40:04.544569   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:04.704278   14127 kapi.go:107] duration metric: took 41.516445005s to wait for kubernetes.io/minikube-addons=registry ...
	I0717 21:40:04.709215   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:04.733017   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:40:05.043869   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:05.208823   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:05.235126   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:40:05.544876   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:05.707398   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:05.736175   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:40:06.045709   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:06.210162   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:06.234127   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:40:06.544542   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:06.707541   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:06.741883   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:40:07.044511   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:07.210555   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:07.235382   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:40:07.546240   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:07.706536   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:07.733650   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:40:08.263614   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:40:08.264165   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:08.266558   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:08.545454   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:08.707499   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:08.734076   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:40:09.044505   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:09.207630   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:09.234037   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:40:09.548240   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:09.707199   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:09.738564   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:40:10.045939   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:10.207364   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:10.233525   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:40:10.544135   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:10.707800   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:10.737602   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:40:11.044475   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:11.209546   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:11.233674   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:40:11.544759   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:11.709812   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:11.734119   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:40:12.045433   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:12.207425   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:12.233851   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:40:12.545157   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:12.707489   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:12.733356   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:40:13.044854   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:13.207360   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:13.233861   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:40:13.545058   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:13.713095   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:13.732813   14127 kapi.go:107] duration metric: took 49.075912302s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0717 21:40:14.045045   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:14.331252   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:14.545113   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:14.709078   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:15.044409   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:15.207618   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:15.545038   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:15.708289   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:16.044711   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:16.207968   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:16.545045   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:16.707472   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:17.044647   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:17.209055   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:17.544979   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:17.707880   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:18.044176   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:18.208085   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:18.545110   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:18.708413   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:19.045903   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:19.207855   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:19.545488   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:19.706604   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:20.045466   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:20.207767   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:20.543963   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:20.707552   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:21.045124   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:21.206841   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:21.544994   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:21.709290   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:22.044524   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:22.207113   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:22.544544   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:22.706839   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:23.044741   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:23.207589   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:23.545316   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:23.707301   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:24.044574   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:24.207444   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:24.545167   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:24.709557   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:25.045089   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:25.207906   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:25.544280   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:25.706950   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:26.044511   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:26.207893   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:26.545059   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:26.709842   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:27.045101   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:27.207802   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:27.545647   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:27.707936   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:28.045034   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:28.208576   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:28.545720   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:28.719908   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:29.044459   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:29.208413   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:29.546023   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:29.708531   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:30.045025   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:30.207686   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:30.545423   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:30.707017   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:31.044169   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:31.207441   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:31.544643   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:31.707498   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:32.355235   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:32.355379   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:32.545289   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:32.706575   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:33.044636   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:33.207359   14127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:40:33.544660   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:33.710767   14127 kapi.go:107] duration metric: took 1m10.519654483s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0717 21:40:34.044673   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:34.544706   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:35.045203   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:35.544867   14127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:40:36.044504   14127 kapi.go:107] duration metric: took 1m8.007974466s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0717 21:40:36.046596   14127 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-061866 cluster.
	I0717 21:40:36.048333   14127 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0717 21:40:36.049824   14127 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0717 21:40:36.051400   14127 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, ingress-dns, default-storageclass, helm-tiller, metrics-server, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0717 21:40:36.052839   14127 addons.go:502] enable addons completed in 1m23.541701504s: enabled=[cloud-spanner storage-provisioner ingress-dns default-storageclass helm-tiller metrics-server inspektor-gadget volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0717 21:40:36.052872   14127 start.go:233] waiting for cluster config update ...
	I0717 21:40:36.052892   14127 start.go:242] writing updated cluster config ...
	I0717 21:40:36.053116   14127 ssh_runner.go:195] Run: rm -f paused
	I0717 21:40:36.105755   14127 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 21:40:36.107682   14127 out.go:177] * Done! kubectl is now configured to use "addons-061866" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID
	3715ab16d3d15       98f6c3b32d565       2 seconds ago        Exited              helm-test                                0                   98acab28e1087
	d15c3d4e2bae0       beae173ccac6a       4 seconds ago        Exited              registry-test                            0                   88325e3d24eee
	a97d249c66707       021283c8eb95b       5 seconds ago        Running             task-pv-container                        0                   dee8b64b15aa2
	3a11af5ab3235       0a2cf7f083e9c       13 seconds ago       Running             headlamp                                 0                   0852b969e03a6
	00bbdd1b3bf6d       6d2a98b274382       22 seconds ago       Running             gcp-auth                                 0                   4b7fdd1e4455a
	3659d5452cd7e       825aff16c20cc       24 seconds ago       Running             controller                               0                   d15d942359ccb
	8db02afe3d545       738351fd438f0       44 seconds ago       Running             csi-snapshotter                          0                   0055d17bef065
	f95e0e3a3ae2c       931dbfd16f87c       45 seconds ago       Running             csi-provisioner                          0                   0055d17bef065
	ba4ab0c92ecd0       e899260153aed       47 seconds ago       Running             liveness-probe                           0                   0055d17bef065
	9baaca47bd3f1       e255e073c508c       48 seconds ago       Running             hostpath                                 0                   0055d17bef065
	d49828f6415a1       88ef14a257f42       50 seconds ago       Running             node-driver-registrar                    0                   0055d17bef065
	acc86b3fc8403       7e7451bb70423       50 seconds ago       Exited              patch                                    1                   c9481e598bfc6
	8a234ad5ca24a       7e7451bb70423       51 seconds ago       Exited              patch                                    0                   138f6f8a251a2
	bc75ff617e402       7e7451bb70423       51 seconds ago       Exited              create                                   0                   9031e3d7b9ea3
	87299338601a1       7e7451bb70423       51 seconds ago       Exited              create                                   0                   dabe3d5f91079
	079cd56ac82fd       aa61ee9c70bc4       53 seconds ago       Running             volume-snapshot-controller               0                   64ccecf436cd9
	1ef7b3cb180c3       aa61ee9c70bc4       58 seconds ago       Running             volume-snapshot-controller               0                   43b5c967b5388
	61de3533a1b16       19a639eda60f0       59 seconds ago       Running             csi-resizer                              0                   5fff373239980
	e684f1e85e82d       a1ed5895ba635       About a minute ago   Running             csi-external-health-monitor-controller   0                   0055d17bef065
	74dedc42bfc91       59cbb42146a37       About a minute ago   Running             csi-attacher                             0                   bc1d413f38f8a
	12b506741d97c       e4cd98cf7c471       About a minute ago   Running             gadget                                   0                   4f0c729005892
	1f08d9322f447       817bbe3f2e517       About a minute ago   Running             metrics-server                           0                   813a1235e8e08
	f4d37308b6c0d       3f39089e90831       About a minute ago   Running             tiller                                   0                   4fc1077529393
	6295a6b255349       1499ed4fbd0aa       About a minute ago   Running             minikube-ingress-dns                     0                   96432fc41d0fc
	9b55249ffe6ce       6e38f40d628db       About a minute ago   Running             storage-provisioner                      0                   a13487c83248c
	d4060b2bcf048       ead0a4a53df89       About a minute ago   Running             coredns                                  0                   e017f3de8096f
	a2dd7da36eb51       5780543258cf0       About a minute ago   Running             kube-proxy                               0                   90c51b427db78
	38cd1c7e4c690       86b6af7dd652c       2 minutes ago        Running             etcd                                     0                   facaf34558912
	12c61943f7013       41697ceeb70b3       2 minutes ago        Running             kube-scheduler                           0                   8ddf917c7f491
	8dd823dc58ee1       08a0c939e61b7       2 minutes ago        Running             kube-apiserver                           0                   e7878e7a2eb01
	a78fef0dbf515       7cffc01dba0e1       2 minutes ago        Running             kube-controller-manager                  0                   dcbf51bdf7e77
	
	* 
	* ==> containerd <==
	* -- Journal begins at Mon 2023-07-17 21:38:23 UTC, ends at Mon 2023-07-17 21:40:57 UTC. --
	Jul 17 21:40:56 addons-061866 containerd[681]: time="2023-07-17T21:40:56.198959904Z" level=info msg="Container to stop \"f58076b00befa335d7ee11dece6491195ef94c60b51ccfe5ee1459861c4ccf3f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jul 17 21:40:56 addons-061866 containerd[681]: time="2023-07-17T21:40:56.248661003Z" level=info msg="shim disconnected" id=9ad7151b0d387bd362bc935d1a88e58c05075d2aad7d2dc8e8488baaaf2740ab namespace=k8s.io
	Jul 17 21:40:56 addons-061866 containerd[681]: time="2023-07-17T21:40:56.250046514Z" level=warning msg="cleaning up after shim disconnected" id=9ad7151b0d387bd362bc935d1a88e58c05075d2aad7d2dc8e8488baaaf2740ab namespace=k8s.io
	Jul 17 21:40:56 addons-061866 containerd[681]: time="2023-07-17T21:40:56.250168900Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jul 17 21:40:56 addons-061866 containerd[681]: time="2023-07-17T21:40:56.335164332Z" level=info msg="shim disconnected" id=cfe9125d1b4a684cd32aeab0ca61a78a2325b3aa6ca7d7bdc9a572ab0888225c namespace=k8s.io
	Jul 17 21:40:56 addons-061866 containerd[681]: time="2023-07-17T21:40:56.335243015Z" level=warning msg="cleaning up after shim disconnected" id=cfe9125d1b4a684cd32aeab0ca61a78a2325b3aa6ca7d7bdc9a572ab0888225c namespace=k8s.io
	Jul 17 21:40:56 addons-061866 containerd[681]: time="2023-07-17T21:40:56.335255326Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jul 17 21:40:56 addons-061866 containerd[681]: time="2023-07-17T21:40:56.422416425Z" level=info msg="TearDown network for sandbox \"9ad7151b0d387bd362bc935d1a88e58c05075d2aad7d2dc8e8488baaaf2740ab\" successfully"
	Jul 17 21:40:56 addons-061866 containerd[681]: time="2023-07-17T21:40:56.422598525Z" level=info msg="StopPodSandbox for \"9ad7151b0d387bd362bc935d1a88e58c05075d2aad7d2dc8e8488baaaf2740ab\" returns successfully"
	Jul 17 21:40:56 addons-061866 containerd[681]: time="2023-07-17T21:40:56.428966010Z" level=info msg="Finish port forwarding for \"4fc1077529393fed177e6647e63f3ec0656ad2f203e041fa874d690975cdbc63\" port 44134"
	Jul 17 21:40:56 addons-061866 containerd[681]: time="2023-07-17T21:40:56.533346537Z" level=info msg="TearDown network for sandbox \"cfe9125d1b4a684cd32aeab0ca61a78a2325b3aa6ca7d7bdc9a572ab0888225c\" successfully"
	Jul 17 21:40:56 addons-061866 containerd[681]: time="2023-07-17T21:40:56.533640699Z" level=info msg="StopPodSandbox for \"cfe9125d1b4a684cd32aeab0ca61a78a2325b3aa6ca7d7bdc9a572ab0888225c\" returns successfully"
	Jul 17 21:40:56 addons-061866 containerd[681]: time="2023-07-17T21:40:56.945686958Z" level=info msg="RemoveContainer for \"596f19fc00278eb585eaf69656cff34bb679b15cfbb24b029c9a5e201aa90283\""
	Jul 17 21:40:56 addons-061866 containerd[681]: time="2023-07-17T21:40:56.953415461Z" level=info msg="StopPodSandbox for \"98acab28e1087e6c83ceb088e35a96a2f850b0b28b4a7710b942fe50e54c1f86\""
	Jul 17 21:40:56 addons-061866 containerd[681]: time="2023-07-17T21:40:56.953767527Z" level=info msg="Container to stop \"3715ab16d3d15db3c07b9df1a8fe9234c20ace28442871db1e3988a11494a21f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jul 17 21:40:56 addons-061866 containerd[681]: time="2023-07-17T21:40:56.970210966Z" level=info msg="RemoveContainer for \"596f19fc00278eb585eaf69656cff34bb679b15cfbb24b029c9a5e201aa90283\" returns successfully"
	Jul 17 21:40:56 addons-061866 containerd[681]: time="2023-07-17T21:40:56.987773946Z" level=error msg="ContainerStatus for \"596f19fc00278eb585eaf69656cff34bb679b15cfbb24b029c9a5e201aa90283\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"596f19fc00278eb585eaf69656cff34bb679b15cfbb24b029c9a5e201aa90283\": not found"
	Jul 17 21:40:57 addons-061866 containerd[681]: time="2023-07-17T21:40:57.007391335Z" level=info msg="RemoveContainer for \"f58076b00befa335d7ee11dece6491195ef94c60b51ccfe5ee1459861c4ccf3f\""
	Jul 17 21:40:57 addons-061866 containerd[681]: time="2023-07-17T21:40:57.039123019Z" level=info msg="RemoveContainer for \"f58076b00befa335d7ee11dece6491195ef94c60b51ccfe5ee1459861c4ccf3f\" returns successfully"
	Jul 17 21:40:57 addons-061866 containerd[681]: time="2023-07-17T21:40:57.041186300Z" level=error msg="ContainerStatus for \"f58076b00befa335d7ee11dece6491195ef94c60b51ccfe5ee1459861c4ccf3f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f58076b00befa335d7ee11dece6491195ef94c60b51ccfe5ee1459861c4ccf3f\": not found"
	Jul 17 21:40:57 addons-061866 containerd[681]: time="2023-07-17T21:40:57.069715545Z" level=info msg="shim disconnected" id=98acab28e1087e6c83ceb088e35a96a2f850b0b28b4a7710b942fe50e54c1f86 namespace=k8s.io
	Jul 17 21:40:57 addons-061866 containerd[681]: time="2023-07-17T21:40:57.069961761Z" level=warning msg="cleaning up after shim disconnected" id=98acab28e1087e6c83ceb088e35a96a2f850b0b28b4a7710b942fe50e54c1f86 namespace=k8s.io
	Jul 17 21:40:57 addons-061866 containerd[681]: time="2023-07-17T21:40:57.070176858Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jul 17 21:40:57 addons-061866 containerd[681]: time="2023-07-17T21:40:57.158364806Z" level=info msg="TearDown network for sandbox \"98acab28e1087e6c83ceb088e35a96a2f850b0b28b4a7710b942fe50e54c1f86\" successfully"
	Jul 17 21:40:57 addons-061866 containerd[681]: time="2023-07-17T21:40:57.158540210Z" level=info msg="StopPodSandbox for \"98acab28e1087e6c83ceb088e35a96a2f850b0b28b4a7710b942fe50e54c1f86\" returns successfully"
	
	* 
	* ==> coredns [d4060b2bcf0480e16f74e1a05450c31935d145b754d799b39988d9c766ad7962] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:49880 - 10621 "HINFO IN 2307305504557562327.634613874604375857. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.059910879s
	[INFO] 10.244.0.19:60172 - 8966 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000275396s
	[INFO] 10.244.0.19:51071 - 637 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00038201s
	[INFO] 10.244.0.19:48223 - 15642 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000066096s
	[INFO] 10.244.0.19:47949 - 44607 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000063569s
	[INFO] 10.244.0.19:43181 - 62801 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000066982s
	[INFO] 10.244.0.19:50724 - 43416 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00008279s
	[INFO] 10.244.0.19:32873 - 55965 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000718567s
	[INFO] 10.244.0.19:54968 - 18547 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000733589s
	[INFO] 10.244.0.22:44690 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000283287s
	[INFO] 10.244.0.22:56343 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00018688s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-061866
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-061866
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8
	                    minikube.k8s.io/name=addons-061866
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T21_38_59_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-061866
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-061866"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 21:38:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-061866
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 21:40:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 21:40:31 +0000   Mon, 17 Jul 2023 21:38:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 21:40:31 +0000   Mon, 17 Jul 2023 21:38:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 21:40:31 +0000   Mon, 17 Jul 2023 21:38:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 21:40:31 +0000   Mon, 17 Jul 2023 21:39:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.55
	  Hostname:    addons-061866
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 5029a0cd3c0047ada105b8f145361db3
	  System UUID:                5029a0cd-3c00-47ad-a105-b8f145361db3
	  Boot ID:                    adaf8f3c-ef40-4405-b08c-4b28e51961f0
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.2
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  gadget                      gadget-7prjj                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  gcp-auth                    gcp-auth-58478865f7-brd5g                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  headlamp                    headlamp-66f6498c69-dqqgl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  ingress-nginx               ingress-nginx-controller-7799c6795f-sc45q    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         95s
	  kube-system                 coredns-5d78c9869d-cwlf8                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     105s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 csi-hostpathplugin-4cvtv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 etcd-addons-061866                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         117s
	  kube-system                 kube-apiserver-addons-061866                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-addons-061866        200m (10%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-rwnfr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-addons-061866                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 metrics-server-844d8db974-22fzn              100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         98s
	  kube-system                 snapshot-controller-75bbb956b9-8kbj2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 snapshot-controller-75bbb956b9-n6ghc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 tiller-deploy-6847666dc-gtmd8                0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 103s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  2m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m6s (x8 over 2m7s)  kubelet          Node addons-061866 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s (x8 over 2m7s)  kubelet          Node addons-061866 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m6s (x7 over 2m7s)  kubelet          Node addons-061866 status is now: NodeHasSufficientPID
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  118s                 kubelet          Node addons-061866 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s                 kubelet          Node addons-061866 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s                 kubelet          Node addons-061866 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  117s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                117s                 kubelet          Node addons-061866 status is now: NodeReady
	  Normal  RegisteredNode           106s                 node-controller  Node addons-061866 event: Registered Node addons-061866 in Controller
	
	* 
	* ==> dmesg <==
	* [  +4.432111] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.357937] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.153551] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.080216] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.559144] systemd-fstab-generator[549]: Ignoring "noauto" for root device
	[  +0.116831] systemd-fstab-generator[560]: Ignoring "noauto" for root device
	[  +0.147283] systemd-fstab-generator[573]: Ignoring "noauto" for root device
	[  +0.101782] systemd-fstab-generator[584]: Ignoring "noauto" for root device
	[  +0.277079] systemd-fstab-generator[611]: Ignoring "noauto" for root device
	[  +5.823918] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +5.529281] systemd-fstab-generator[832]: Ignoring "noauto" for root device
	[  +9.255414] systemd-fstab-generator[1194]: Ignoring "noauto" for root device
	[Jul17 21:39] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.032015] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.450767] kauditd_printk_skb: 36 callbacks suppressed
	[  +9.859293] kauditd_printk_skb: 2 callbacks suppressed
	[Jul17 21:40] kauditd_printk_skb: 16 callbacks suppressed
	[ +18.866635] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.266821] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.083176] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.044654] kauditd_printk_skb: 3 callbacks suppressed
	[ +12.442459] kauditd_printk_skb: 17 callbacks suppressed
	
	* 
	* ==> etcd [38cd1c7e4c690e225d0d7c9edbd7d0c643ab2da29455e750f9ff0cbe84737b11] <==
	* {"level":"warn","ts":"2023-07-17T21:40:08.255Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"261.820538ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-17T21:40:08.255Z","caller":"traceutil/trace.go:171","msg":"trace[1935863175] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:980; }","duration":"261.841002ms","start":"2023-07-17T21:40:07.993Z","end":"2023-07-17T21:40:08.255Z","steps":["trace[1935863175] 'agreement among raft nodes before linearized reading'  (duration: 261.803184ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T21:40:14.324Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.762109ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13759"}
	{"level":"info","ts":"2023-07-17T21:40:14.324Z","caller":"traceutil/trace.go:171","msg":"trace[585644108] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1034; }","duration":"121.859875ms","start":"2023-07-17T21:40:14.202Z","end":"2023-07-17T21:40:14.324Z","steps":["trace[585644108] 'range keys from in-memory index tree'  (duration: 121.457557ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T21:40:32.348Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.036735ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2023-07-17T21:40:32.348Z","caller":"traceutil/trace.go:171","msg":"trace[513869160] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1062; }","duration":"120.137508ms","start":"2023-07-17T21:40:32.228Z","end":"2023-07-17T21:40:32.348Z","steps":["trace[513869160] 'range keys from in-memory index tree'  (duration: 119.91721ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T21:40:32.348Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"355.526374ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-17T21:40:32.348Z","caller":"traceutil/trace.go:171","msg":"trace[2067847338] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1062; }","duration":"355.605727ms","start":"2023-07-17T21:40:31.993Z","end":"2023-07-17T21:40:32.348Z","steps":["trace[2067847338] 'range keys from in-memory index tree'  (duration: 355.422355ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T21:40:32.348Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T21:40:31.993Z","time spent":"355.64485ms","remote":"127.0.0.1:52808","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2023-07-17T21:40:32.348Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"307.771848ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10851"}
	{"level":"info","ts":"2023-07-17T21:40:32.349Z","caller":"traceutil/trace.go:171","msg":"trace[1689298719] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1062; }","duration":"307.824266ms","start":"2023-07-17T21:40:32.041Z","end":"2023-07-17T21:40:32.349Z","steps":["trace[1689298719] 'range keys from in-memory index tree'  (duration: 307.635589ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T21:40:32.349Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T21:40:32.041Z","time spent":"307.860378ms","remote":"127.0.0.1:52844","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":10874,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2023-07-17T21:40:32.349Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.93835ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13759"}
	{"level":"info","ts":"2023-07-17T21:40:32.349Z","caller":"traceutil/trace.go:171","msg":"trace[418613267] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1062; }","duration":"146.987923ms","start":"2023-07-17T21:40:32.202Z","end":"2023-07-17T21:40:32.349Z","steps":["trace[418613267] 'range keys from in-memory index tree'  (duration: 146.821954ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T21:40:37.375Z","caller":"traceutil/trace.go:171","msg":"trace[1904422235] linearizableReadLoop","detail":"{readStateIndex:1131; appliedIndex:1130; }","duration":"219.671061ms","start":"2023-07-17T21:40:37.155Z","end":"2023-07-17T21:40:37.375Z","steps":["trace[1904422235] 'read index received'  (duration: 219.51788ms)","trace[1904422235] 'applied index is now lower than readState.Index'  (duration: 152.797µs)"],"step_count":2}
	{"level":"info","ts":"2023-07-17T21:40:37.375Z","caller":"traceutil/trace.go:171","msg":"trace[1306902404] transaction","detail":"{read_only:false; response_revision:1095; number_of_response:1; }","duration":"357.852555ms","start":"2023-07-17T21:40:37.017Z","end":"2023-07-17T21:40:37.375Z","steps":["trace[1306902404] 'process raft request'  (duration: 357.559591ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T21:40:37.375Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T21:40:37.017Z","time spent":"357.894225ms","remote":"127.0.0.1:52834","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":454,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/persistentvolumeclaims/default/hpvc\" mod_revision:0 > success:<request_put:<key:\"/registry/persistentvolumeclaims/default/hpvc\" value_size:401 >> failure:<>"}
	{"level":"warn","ts":"2023-07-17T21:40:37.375Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.671849ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:17 size:78343"}
	{"level":"info","ts":"2023-07-17T21:40:37.375Z","caller":"traceutil/trace.go:171","msg":"trace[1305944717] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:17; response_revision:1095; }","duration":"218.716838ms","start":"2023-07-17T21:40:37.157Z","end":"2023-07-17T21:40:37.375Z","steps":["trace[1305944717] 'agreement among raft nodes before linearized reading'  (duration: 218.500736ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T21:40:37.376Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"220.529949ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:1 size:3070"}
	{"level":"info","ts":"2023-07-17T21:40:37.376Z","caller":"traceutil/trace.go:171","msg":"trace[1650233830] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:1; response_revision:1095; }","duration":"220.557911ms","start":"2023-07-17T21:40:37.155Z","end":"2023-07-17T21:40:37.376Z","steps":["trace[1650233830] 'agreement among raft nodes before linearized reading'  (duration: 220.41815ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T21:40:42.536Z","caller":"traceutil/trace.go:171","msg":"trace[706992050] linearizableReadLoop","detail":"{readStateIndex:1192; appliedIndex:1191; }","duration":"175.914872ms","start":"2023-07-17T21:40:42.360Z","end":"2023-07-17T21:40:42.536Z","steps":["trace[706992050] 'read index received'  (duration: 175.756948ms)","trace[706992050] 'applied index is now lower than readState.Index'  (duration: 157.512µs)"],"step_count":2}
	{"level":"info","ts":"2023-07-17T21:40:42.537Z","caller":"traceutil/trace.go:171","msg":"trace[1655906162] transaction","detail":"{read_only:false; response_revision:1154; number_of_response:1; }","duration":"278.809784ms","start":"2023-07-17T21:40:42.258Z","end":"2023-07-17T21:40:42.537Z","steps":["trace[1655906162] 'process raft request'  (duration: 277.979637ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T21:40:42.537Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.896202ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-07-17T21:40:42.537Z","caller":"traceutil/trace.go:171","msg":"trace[1181141820] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; response_count:0; response_revision:1154; }","duration":"176.961452ms","start":"2023-07-17T21:40:42.360Z","end":"2023-07-17T21:40:42.537Z","steps":["trace[1181141820] 'agreement among raft nodes before linearized reading'  (duration: 176.853772ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [00bbdd1b3bf6d3cb34508062e2d6a695d95ac31544695997bf66d204334994c9] <==
	* 2023/07/17 21:40:35 GCP Auth Webhook started!
	2023/07/17 21:40:37 Ready to marshal response ...
	2023/07/17 21:40:37 Ready to write response ...
	2023/07/17 21:40:37 Ready to marshal response ...
	2023/07/17 21:40:37 Ready to write response ...
	2023/07/17 21:40:37 Ready to marshal response ...
	2023/07/17 21:40:37 Ready to write response ...
	2023/07/17 21:40:44 Ready to marshal response ...
	2023/07/17 21:40:44 Ready to write response ...
	2023/07/17 21:40:46 Ready to marshal response ...
	2023/07/17 21:40:46 Ready to write response ...
	2023/07/17 21:40:46 Ready to marshal response ...
	2023/07/17 21:40:46 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  21:40:57 up 2 min,  0 users,  load average: 2.34, 1.37, 0.54
	Linux addons-061866 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [8dd823dc58ee1ae2b6cab8a7f0c993c164e3d12837e154899650785caebc1b26] <==
	* I0717 21:39:20.682215       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 21:39:20.726319       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 21:39:20.726389       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 21:39:20.843055       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 21:39:20.843152       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 21:39:20.967780       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 21:39:20.967852       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 21:39:21.229447       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0717 21:39:22.737415       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller" clusterIPs=map[IPv4:10.105.40.210]
	I0717 21:39:22.796906       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-admission" clusterIPs=map[IPv4:10.110.162.208]
	I0717 21:39:22.891001       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I0717 21:39:24.000656       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs=map[IPv4:10.99.153.112]
	I0717 21:39:24.038688       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I0717 21:39:24.506840       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs=map[IPv4:10.109.31.132]
	I0717 21:39:27.801165       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs=map[IPv4:10.96.170.75]
	E0717 21:39:41.326145       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.103.71:443/apis/metrics.k8s.io/v1beta1: Get "https://10.96.103.71:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.96.103.71:443: connect: connection refused
	I0717 21:39:41.327253       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.96.103.71:443: connect: connection refused
	I0717 21:39:41.327266       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0717 21:39:41.329447       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.103.71:443/apis/metrics.k8s.io/v1beta1: Get "https://10.96.103.71:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.96.103.71:443: connect: connection refused
	E0717 21:39:41.332816       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.103.71:443/apis/metrics.k8s.io/v1beta1: Get "https://10.96.103.71:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.96.103.71:443: connect: connection refused
	E0717 21:39:41.354601       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.103.71:443/apis/metrics.k8s.io/v1beta1: Get "https://10.96.103.71:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.96.103.71:443: connect: connection refused
	I0717 21:39:41.722831       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0717 21:39:56.192358       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0717 21:40:37.498999       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs=map[IPv4:10.97.176.114]
	I0717 21:40:56.193742       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [a78fef0dbf515a5dabf93afa264da2ac9ecc875f1ab8b53bd2465c22917faf10] <==
	* I0717 21:40:09.464775       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0717 21:40:09.475842       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0717 21:40:09.485319       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0717 21:40:09.485960       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0717 21:40:09.498982       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0717 21:40:09.502040       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0717 21:40:09.519703       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0717 21:40:09.528605       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0717 21:40:09.529589       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0717 21:40:09.558957       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	I0717 21:40:09.571965       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	I0717 21:40:09.583738       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	I0717 21:40:09.592707       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	I0717 21:40:09.593044       1 event.go:307] "Event occurred" object="ingress-nginx/ingress-nginx-admission-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0717 21:40:37.406468       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"hostpath.csi.k8s.io\" or manually created by system administrator"
	I0717 21:40:37.530131       1 event.go:307] "Event occurred" object="headlamp/headlamp" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set headlamp-66f6498c69 to 1"
	I0717 21:40:37.543699       1 event.go:307] "Event occurred" object="headlamp/headlamp-66f6498c69" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"headlamp-66f6498c69-\" is forbidden: error looking up service account headlamp/headlamp: serviceaccount \"headlamp\" not found"
	E0717 21:40:37.552593       1 replica_set.go:544] sync "headlamp/headlamp-66f6498c69" failed with pods "headlamp-66f6498c69-" is forbidden: error looking up service account headlamp/headlamp: serviceaccount "headlamp" not found
	I0717 21:40:37.602207       1 event.go:307] "Event occurred" object="headlamp/headlamp-66f6498c69" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: headlamp-66f6498c69-dqqgl"
	I0717 21:40:39.023539       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0717 21:40:39.030091       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0717 21:40:39.089431       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0717 21:40:39.090893       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0717 21:40:41.103820       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"hostpath.csi.k8s.io\" or manually created by system administrator"
	I0717 21:40:43.937953       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"hostpath.csi.k8s.io\" or manually created by system administrator"
	
	* 
	* ==> kube-proxy [a2dd7da36eb51d21dc2aea8d4ddd870a6930ce8a2168392f5cf19ab7843f8b97] <==
	* I0717 21:39:13.296143       1 node.go:141] Successfully retrieved node IP: 192.168.39.55
	I0717 21:39:13.296278       1 server_others.go:110] "Detected node IP" address="192.168.39.55"
	I0717 21:39:13.296368       1 server_others.go:554] "Using iptables proxy"
	I0717 21:39:13.750385       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0717 21:39:13.750436       1 server_others.go:192] "Using iptables Proxier"
	I0717 21:39:13.750466       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 21:39:13.751136       1 server.go:658] "Version info" version="v1.27.3"
	I0717 21:39:13.751176       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 21:39:13.769743       1 config.go:188] "Starting service config controller"
	I0717 21:39:13.769790       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 21:39:13.769812       1 config.go:97] "Starting endpoint slice config controller"
	I0717 21:39:13.769815       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 21:39:13.774573       1 config.go:315] "Starting node config controller"
	I0717 21:39:13.774586       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 21:39:13.870368       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0717 21:39:13.870578       1 shared_informer.go:318] Caches are synced for service config
	I0717 21:39:13.874933       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [12c61943f7013b1064e2b4fa58d23ebd2ae733b2949d65838211e806ce8e4bff] <==
	* W0717 21:38:56.334141       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 21:38:56.334177       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 21:38:56.334187       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 21:38:56.334195       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 21:38:56.335568       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 21:38:56.337856       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 21:38:56.337865       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 21:38:56.338090       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 21:38:56.339827       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 21:38:56.340208       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 21:38:56.340360       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 21:38:56.340868       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 21:38:56.341188       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 21:38:56.341466       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 21:38:57.268997       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 21:38:57.269244       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 21:38:57.342726       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 21:38:57.342789       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 21:38:57.349823       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 21:38:57.349874       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 21:38:57.426366       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 21:38:57.426390       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 21:38:57.467389       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 21:38:57.467440       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 21:39:00.309402       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-07-17 21:38:23 UTC, ends at Mon 2023-07-17 21:40:58 UTC. --
	Jul 17 21:40:55 addons-061866 kubelet[1201]: I0717 21:40:55.285112    1201 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-p2khs\" (UniqueName: \"kubernetes.io/projected/3ed9b857-a404-48d7-9e89-985a9ee4f3f8-kube-api-access-p2khs\") on node \"addons-061866\" DevicePath \"\""
	Jul 17 21:40:55 addons-061866 kubelet[1201]: I0717 21:40:55.285245    1201 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3ed9b857-a404-48d7-9e89-985a9ee4f3f8-gcp-creds\") on node \"addons-061866\" DevicePath \"\""
	Jul 17 21:40:55 addons-061866 kubelet[1201]: I0717 21:40:55.871581    1201 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88325e3d24eee80ca3f2b7a81fd3b955bb0fce0bfa6db5c89a6c0292da0a9a8e"
	Jul 17 21:40:55 addons-061866 kubelet[1201]: I0717 21:40:55.919058    1201 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=3ed9b857-a404-48d7-9e89-985a9ee4f3f8 path="/var/lib/kubelet/pods/3ed9b857-a404-48d7-9e89-985a9ee4f3f8/volumes"
	Jul 17 21:40:56 addons-061866 kubelet[1201]: I0717 21:40:56.496405    1201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw69d\" (UniqueName: \"kubernetes.io/projected/2da542cc-0709-40bc-b84b-896cc24ab425-kube-api-access-mw69d\") pod \"2da542cc-0709-40bc-b84b-896cc24ab425\" (UID: \"2da542cc-0709-40bc-b84b-896cc24ab425\") "
	Jul 17 21:40:56 addons-061866 kubelet[1201]: I0717 21:40:56.504831    1201 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2da542cc-0709-40bc-b84b-896cc24ab425-kube-api-access-mw69d" (OuterVolumeSpecName: "kube-api-access-mw69d") pod "2da542cc-0709-40bc-b84b-896cc24ab425" (UID: "2da542cc-0709-40bc-b84b-896cc24ab425"). InnerVolumeSpecName "kube-api-access-mw69d". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 21:40:56 addons-061866 kubelet[1201]: I0717 21:40:56.597327    1201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9xsh\" (UniqueName: \"kubernetes.io/projected/a00c0e48-1f03-43df-af79-4e00a7720ba7-kube-api-access-f9xsh\") pod \"a00c0e48-1f03-43df-af79-4e00a7720ba7\" (UID: \"a00c0e48-1f03-43df-af79-4e00a7720ba7\") "
	Jul 17 21:40:56 addons-061866 kubelet[1201]: I0717 21:40:56.597469    1201 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mw69d\" (UniqueName: \"kubernetes.io/projected/2da542cc-0709-40bc-b84b-896cc24ab425-kube-api-access-mw69d\") on node \"addons-061866\" DevicePath \"\""
	Jul 17 21:40:56 addons-061866 kubelet[1201]: I0717 21:40:56.600011    1201 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a00c0e48-1f03-43df-af79-4e00a7720ba7-kube-api-access-f9xsh" (OuterVolumeSpecName: "kube-api-access-f9xsh") pod "a00c0e48-1f03-43df-af79-4e00a7720ba7" (UID: "a00c0e48-1f03-43df-af79-4e00a7720ba7"). InnerVolumeSpecName "kube-api-access-f9xsh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 21:40:56 addons-061866 kubelet[1201]: I0717 21:40:56.698547    1201 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-f9xsh\" (UniqueName: \"kubernetes.io/projected/a00c0e48-1f03-43df-af79-4e00a7720ba7-kube-api-access-f9xsh\") on node \"addons-061866\" DevicePath \"\""
	Jul 17 21:40:56 addons-061866 kubelet[1201]: I0717 21:40:56.933889    1201 scope.go:115] "RemoveContainer" containerID="596f19fc00278eb585eaf69656cff34bb679b15cfbb24b029c9a5e201aa90283"
	Jul 17 21:40:56 addons-061866 kubelet[1201]: I0717 21:40:56.986837    1201 scope.go:115] "RemoveContainer" containerID="596f19fc00278eb585eaf69656cff34bb679b15cfbb24b029c9a5e201aa90283"
	Jul 17 21:40:56 addons-061866 kubelet[1201]: E0717 21:40:56.993548    1201 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"596f19fc00278eb585eaf69656cff34bb679b15cfbb24b029c9a5e201aa90283\": not found" containerID="596f19fc00278eb585eaf69656cff34bb679b15cfbb24b029c9a5e201aa90283"
	Jul 17 21:40:56 addons-061866 kubelet[1201]: I0717 21:40:56.993788    1201 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:596f19fc00278eb585eaf69656cff34bb679b15cfbb24b029c9a5e201aa90283} err="failed to get container status \"596f19fc00278eb585eaf69656cff34bb679b15cfbb24b029c9a5e201aa90283\": rpc error: code = NotFound desc = an error occurred when try to find container \"596f19fc00278eb585eaf69656cff34bb679b15cfbb24b029c9a5e201aa90283\": not found"
	Jul 17 21:40:56 addons-061866 kubelet[1201]: I0717 21:40:56.994009    1201 scope.go:115] "RemoveContainer" containerID="f58076b00befa335d7ee11dece6491195ef94c60b51ccfe5ee1459861c4ccf3f"
	Jul 17 21:40:57 addons-061866 kubelet[1201]: I0717 21:40:57.039893    1201 scope.go:115] "RemoveContainer" containerID="f58076b00befa335d7ee11dece6491195ef94c60b51ccfe5ee1459861c4ccf3f"
	Jul 17 21:40:57 addons-061866 kubelet[1201]: E0717 21:40:57.042684    1201 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f58076b00befa335d7ee11dece6491195ef94c60b51ccfe5ee1459861c4ccf3f\": not found" containerID="f58076b00befa335d7ee11dece6491195ef94c60b51ccfe5ee1459861c4ccf3f"
	Jul 17 21:40:57 addons-061866 kubelet[1201]: I0717 21:40:57.042749    1201 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:f58076b00befa335d7ee11dece6491195ef94c60b51ccfe5ee1459861c4ccf3f} err="failed to get container status \"f58076b00befa335d7ee11dece6491195ef94c60b51ccfe5ee1459861c4ccf3f\": rpc error: code = NotFound desc = an error occurred when try to find container \"f58076b00befa335d7ee11dece6491195ef94c60b51ccfe5ee1459861c4ccf3f\": not found"
	Jul 17 21:40:57 addons-061866 kubelet[1201]: I0717 21:40:57.207262    1201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2s2c\" (UniqueName: \"kubernetes.io/projected/d792768a-5d74-4dcf-ab50-b7fbd3a3fb5e-kube-api-access-f2s2c\") pod \"d792768a-5d74-4dcf-ab50-b7fbd3a3fb5e\" (UID: \"d792768a-5d74-4dcf-ab50-b7fbd3a3fb5e\") "
	Jul 17 21:40:57 addons-061866 kubelet[1201]: I0717 21:40:57.223668    1201 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d792768a-5d74-4dcf-ab50-b7fbd3a3fb5e-kube-api-access-f2s2c" (OuterVolumeSpecName: "kube-api-access-f2s2c") pod "d792768a-5d74-4dcf-ab50-b7fbd3a3fb5e" (UID: "d792768a-5d74-4dcf-ab50-b7fbd3a3fb5e"). InnerVolumeSpecName "kube-api-access-f2s2c". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 21:40:57 addons-061866 kubelet[1201]: I0717 21:40:57.308872    1201 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-f2s2c\" (UniqueName: \"kubernetes.io/projected/d792768a-5d74-4dcf-ab50-b7fbd3a3fb5e-kube-api-access-f2s2c\") on node \"addons-061866\" DevicePath \"\""
	Jul 17 21:40:57 addons-061866 kubelet[1201]: I0717 21:40:57.897870    1201 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=2da542cc-0709-40bc-b84b-896cc24ab425 path="/var/lib/kubelet/pods/2da542cc-0709-40bc-b84b-896cc24ab425/volumes"
	Jul 17 21:40:57 addons-061866 kubelet[1201]: I0717 21:40:57.898565    1201 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=a00c0e48-1f03-43df-af79-4e00a7720ba7 path="/var/lib/kubelet/pods/a00c0e48-1f03-43df-af79-4e00a7720ba7/volumes"
	Jul 17 21:40:57 addons-061866 kubelet[1201]: I0717 21:40:57.898974    1201 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=d792768a-5d74-4dcf-ab50-b7fbd3a3fb5e path="/var/lib/kubelet/pods/d792768a-5d74-4dcf-ab50-b7fbd3a3fb5e/volumes"
	Jul 17 21:40:57 addons-061866 kubelet[1201]: I0717 21:40:57.968462    1201 scope.go:115] "RemoveContainer" containerID="3715ab16d3d15db3c07b9df1a8fe9234c20ace28442871db1e3988a11494a21f"
	
	* 
	* ==> storage-provisioner [9b55249ffe6ce15b38ca859e988eff83ef2f8a082546a4e63df70dc3c63fc45f] <==
	* I0717 21:39:24.094013       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 21:39:24.567002       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 21:39:24.567052       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 21:39:24.649855       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 21:39:24.650827       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ae76152b-1b2b-458c-92f3-782ef804c327", APIVersion:"v1", ResourceVersion:"755", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-061866_5dc69d31-0054-4d7a-9bf3-9dca43eebecb became leader
	I0717 21:39:24.660399       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-061866_5dc69d31-0054-4d7a-9bf3-9dca43eebecb!
	I0717 21:39:25.066190       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-061866_5dc69d31-0054-4d7a-9bf3-9dca43eebecb!
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-061866 -n addons-061866
helpers_test.go:261: (dbg) Run:  kubectl --context addons-061866 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx ingress-nginx-admission-create-mvmbn ingress-nginx-admission-patch-xdpbc tiller-deploy-6847666dc-gtmd8
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-061866 describe pod nginx ingress-nginx-admission-create-mvmbn ingress-nginx-admission-patch-xdpbc tiller-deploy-6847666dc-gtmd8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-061866 describe pod nginx ingress-nginx-admission-create-mvmbn ingress-nginx-admission-patch-xdpbc tiller-deploy-6847666dc-gtmd8: exit status 1 (101.386644ms)
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-061866/192.168.39.55
	Start Time:       Mon, 17 Jul 2023 21:40:58 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8sktz (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-8sktz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  0s    default-scheduler  Successfully assigned default/nginx to addons-061866

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-mvmbn" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xdpbc" not found
	Error from server (NotFound): pods "tiller-deploy-6847666dc-gtmd8" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-061866 describe pod nginx ingress-nginx-admission-create-mvmbn ingress-nginx-admission-patch-xdpbc tiller-deploy-6847666dc-gtmd8: exit status 1
--- FAIL: TestAddons/parallel/MetricsServer (8.36s)
TestFunctional/parallel/License (0.18s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2284: (dbg) Non-zero exit: out/minikube-linux-amd64 license: exit status 40 (177.555106ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to INET_LICENSES: Failed to download licenses: download request did not return a 200, received: 404
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_license_42713f820c0ac68901ecf7b12bfdf24c2cafe65d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2285: command "\n\n" failed: exit status 40
--- FAIL: TestFunctional/parallel/License (0.18s)

Test pass (270/303)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 8.28
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.05
10 TestDownloadOnly/v1.27.3/json-events 8.27
11 TestDownloadOnly/v1.27.3/preload-exists 0
15 TestDownloadOnly/v1.27.3/LogsDuration 0.06
16 TestDownloadOnly/DeleteAll 0.13
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.12
19 TestBinaryMirror 0.54
20 TestOffline 121.23
22 TestAddons/Setup 144.98
24 TestAddons/parallel/Registry 19.96
25 TestAddons/parallel/Ingress 23.03
26 TestAddons/parallel/InspektorGadget 10.94
28 TestAddons/parallel/HelmTiller 16.15
30 TestAddons/parallel/CSI 63.03
31 TestAddons/parallel/Headlamp 14.49
32 TestAddons/parallel/CloudSpanner 5.74
35 TestAddons/serial/GCPAuth/Namespaces 0.14
36 TestAddons/StoppedEnableDisable 92.06
37 TestCertOptions 73.27
38 TestCertExpiration 265.07
40 TestForceSystemdFlag 53.29
41 TestForceSystemdEnv 51.42
43 TestKVMDriverInstallOrUpdate 2.89
47 TestErrorSpam/setup 53.44
48 TestErrorSpam/start 0.33
49 TestErrorSpam/status 0.75
50 TestErrorSpam/pause 1.46
51 TestErrorSpam/unpause 1.61
52 TestErrorSpam/stop 3.21
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 65.76
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 6.24
59 TestFunctional/serial/KubeContext 0.04
60 TestFunctional/serial/KubectlGetPods 0.07
63 TestFunctional/serial/CacheCmd/cache/add_remote 3.46
64 TestFunctional/serial/CacheCmd/cache/add_local 2.04
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
68 TestFunctional/serial/CacheCmd/cache/cache_reload 2.05
69 TestFunctional/serial/CacheCmd/cache/delete 0.08
70 TestFunctional/serial/MinikubeKubectlCmd 0.1
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
72 TestFunctional/serial/ExtraConfig 39.49
73 TestFunctional/serial/ComponentHealth 0.07
74 TestFunctional/serial/LogsCmd 1.35
75 TestFunctional/serial/LogsFileCmd 1.33
76 TestFunctional/serial/InvalidService 4.21
78 TestFunctional/parallel/ConfigCmd 0.29
79 TestFunctional/parallel/DashboardCmd 19.61
80 TestFunctional/parallel/DryRun 0.28
81 TestFunctional/parallel/InternationalLanguage 0.13
82 TestFunctional/parallel/StatusCmd 1.3
86 TestFunctional/parallel/ServiceCmdConnect 7.84
87 TestFunctional/parallel/AddonsCmd 0.11
88 TestFunctional/parallel/PersistentVolumeClaim 38.87
90 TestFunctional/parallel/SSHCmd 0.46
91 TestFunctional/parallel/CpCmd 0.95
92 TestFunctional/parallel/MySQL 32.2
93 TestFunctional/parallel/FileSync 0.21
94 TestFunctional/parallel/CertSync 1.42
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.48
103 TestFunctional/parallel/ServiceCmd/DeployApp 11.25
104 TestFunctional/parallel/Version/short 0.15
105 TestFunctional/parallel/Version/components 1.06
106 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
107 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
108 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
109 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
110 TestFunctional/parallel/ImageCommands/ImageBuild 4.43
111 TestFunctional/parallel/ImageCommands/Setup 1.3
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
115 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.31
117 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.4
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.36
121 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.34
122 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.45
123 TestFunctional/parallel/ServiceCmd/List 0.32
124 TestFunctional/parallel/ServiceCmd/JSONOutput 0.4
125 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
126 TestFunctional/parallel/ServiceCmd/Format 0.31
127 TestFunctional/parallel/ServiceCmd/URL 0.36
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
135 TestFunctional/parallel/ProfileCmd/profile_list 0.3
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.27
137 TestFunctional/parallel/MountCmd/any-port 20.82
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.5
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.59
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.5
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.33
142 TestFunctional/parallel/MountCmd/specific-port 1.97
143 TestFunctional/parallel/MountCmd/VerifyCleanup 1.63
144 TestFunctional/delete_addon-resizer_images 0.07
145 TestFunctional/delete_my-image_image 0.01
146 TestFunctional/delete_minikube_cached_images 0.02
150 TestIngressAddonLegacy/StartLegacyK8sCluster 83.24
152 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 12.51
153 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.66
154 TestIngressAddonLegacy/serial/ValidateIngressAddons 36.63
157 TestJSONOutput/start/Command 74.01
158 TestJSONOutput/start/Audit 0
160 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/pause/Command 0.63
164 TestJSONOutput/pause/Audit 0
166 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/unpause/Command 0.59
170 TestJSONOutput/unpause/Audit 0
172 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/stop/Command 7.09
176 TestJSONOutput/stop/Audit 0
178 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
180 TestErrorJSONOutput 0.19
185 TestMainNoArgs 0.04
186 TestMinikubeProfile 101.62
189 TestMountStart/serial/StartWithMountFirst 27.71
190 TestMountStart/serial/VerifyMountFirst 0.38
191 TestMountStart/serial/StartWithMountSecond 27.72
192 TestMountStart/serial/VerifyMountSecond 0.38
193 TestMountStart/serial/DeleteFirst 0.65
194 TestMountStart/serial/VerifyMountPostDelete 0.38
195 TestMountStart/serial/Stop 1.16
196 TestMountStart/serial/RestartStopped 23.6
197 TestMountStart/serial/VerifyMountPostStop 0.38
200 TestMultiNode/serial/FreshStart2Nodes 108.78
201 TestMultiNode/serial/DeployApp2Nodes 4.94
202 TestMultiNode/serial/PingHostFrom2Pods 0.82
203 TestMultiNode/serial/AddNode 41.54
204 TestMultiNode/serial/ProfileList 0.2
205 TestMultiNode/serial/CopyFile 7.18
206 TestMultiNode/serial/StopNode 2.22
207 TestMultiNode/serial/StartAfterStop 27.4
208 TestMultiNode/serial/RestartKeepsNodes 324.43
209 TestMultiNode/serial/DeleteNode 1.74
210 TestMultiNode/serial/StopMultiNode 183.62
211 TestMultiNode/serial/RestartMultiNode 92.64
212 TestMultiNode/serial/ValidateNameConflict 49.58
217 TestPreload 240.2
219 TestScheduledStopUnix 120.01
223 TestRunningBinaryUpgrade 260.28
225 TestKubernetesUpgrade 165.6
228 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
229 TestNoKubernetes/serial/StartWithK8s 104.32
237 TestNetworkPlugins/group/false 3.41
248 TestNoKubernetes/serial/StartWithStopK8s 46.68
249 TestNoKubernetes/serial/Start 30.46
250 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
251 TestNoKubernetes/serial/ProfileList 6.73
252 TestNoKubernetes/serial/Stop 1.25
253 TestNoKubernetes/serial/StartNoArgs 37.61
254 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
255 TestStoppedBinaryUpgrade/Setup 0.42
256 TestStoppedBinaryUpgrade/Upgrade 176.99
258 TestPause/serial/Start 128.02
259 TestNetworkPlugins/group/auto/Start 105.67
260 TestNetworkPlugins/group/kindnet/Start 71.75
261 TestStoppedBinaryUpgrade/MinikubeLogs 0.9
262 TestPause/serial/SecondStartNoReconfiguration 9.41
263 TestNetworkPlugins/group/calico/Start 103.11
264 TestPause/serial/Pause 1.22
265 TestPause/serial/VerifyStatus 0.27
266 TestPause/serial/Unpause 0.8
267 TestPause/serial/PauseAgain 0.8
268 TestPause/serial/DeletePaused 1.05
269 TestPause/serial/VerifyDeletedResources 0.39
270 TestNetworkPlugins/group/custom-flannel/Start 114.84
271 TestNetworkPlugins/group/auto/KubeletFlags 0.21
272 TestNetworkPlugins/group/auto/NetCatPod 10.37
273 TestNetworkPlugins/group/auto/DNS 0.21
274 TestNetworkPlugins/group/auto/Localhost 0.18
275 TestNetworkPlugins/group/auto/HairPin 0.17
276 TestNetworkPlugins/group/kindnet/ControllerPod 5.04
277 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
278 TestNetworkPlugins/group/kindnet/NetCatPod 11.57
279 TestNetworkPlugins/group/enable-default-cni/Start 77.36
280 TestNetworkPlugins/group/kindnet/DNS 0.25
281 TestNetworkPlugins/group/kindnet/Localhost 0.21
282 TestNetworkPlugins/group/kindnet/HairPin 0.24
283 TestNetworkPlugins/group/flannel/Start 95.81
284 TestNetworkPlugins/group/calico/ControllerPod 5.03
285 TestNetworkPlugins/group/calico/KubeletFlags 0.24
286 TestNetworkPlugins/group/calico/NetCatPod 11.44
287 TestNetworkPlugins/group/calico/DNS 0.24
288 TestNetworkPlugins/group/calico/Localhost 0.24
289 TestNetworkPlugins/group/calico/HairPin 0.3
290 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
291 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.52
292 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
293 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.57
294 TestNetworkPlugins/group/bridge/Start 108.8
295 TestNetworkPlugins/group/custom-flannel/DNS 0.23
296 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
297 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
298 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
299 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
300 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
302 TestStartStop/group/old-k8s-version/serial/FirstStart 138.56
304 TestStartStop/group/no-preload/serial/FirstStart 116.65
305 TestNetworkPlugins/group/flannel/ControllerPod 5.1
306 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
307 TestNetworkPlugins/group/flannel/NetCatPod 9.49
308 TestNetworkPlugins/group/flannel/DNS 0.18
309 TestNetworkPlugins/group/flannel/Localhost 0.16
310 TestNetworkPlugins/group/flannel/HairPin 0.15
312 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 75.53
313 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
314 TestNetworkPlugins/group/bridge/NetCatPod 13.43
315 TestNetworkPlugins/group/bridge/DNS 0.19
316 TestNetworkPlugins/group/bridge/Localhost 0.2
317 TestNetworkPlugins/group/bridge/HairPin 0.17
319 TestStartStop/group/newest-cni/serial/FirstStart 61.42
320 TestStartStop/group/no-preload/serial/DeployApp 9.52
321 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.55
322 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.6
323 TestStartStop/group/no-preload/serial/Stop 91.94
324 TestStartStop/group/old-k8s-version/serial/DeployApp 9.44
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.22
326 TestStartStop/group/default-k8s-diff-port/serial/Stop 92.32
327 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.2
328 TestStartStop/group/old-k8s-version/serial/Stop 92.56
329 TestStartStop/group/newest-cni/serial/DeployApp 0
330 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.39
331 TestStartStop/group/newest-cni/serial/Stop 3.09
332 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
333 TestStartStop/group/newest-cni/serial/SecondStart 45.12
334 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
335 TestStartStop/group/no-preload/serial/SecondStart 604.94
336 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
339 TestStartStop/group/newest-cni/serial/Pause 2.55
340 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.27
341 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 615.19
343 TestStartStop/group/embed-certs/serial/FirstStart 131.53
344 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
345 TestStartStop/group/old-k8s-version/serial/SecondStart 112.67
346 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 18.02
347 TestStartStop/group/embed-certs/serial/DeployApp 8.51
348 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
349 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.22
350 TestStartStop/group/embed-certs/serial/Stop 92.19
351 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
352 TestStartStop/group/old-k8s-version/serial/Pause 2.56
353 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
354 TestStartStop/group/embed-certs/serial/SecondStart 307.54
355 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
356 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
357 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
358 TestStartStop/group/embed-certs/serial/Pause 2.59
359 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
360 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
361 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
362 TestStartStop/group/no-preload/serial/Pause 2.47
363 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.02
364 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
365 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
366 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.45
TestDownloadOnly/v1.16.0/json-events (8.28s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-057626 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-057626 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (8.281372202s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (8.28s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-057626
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-057626: exit status 85 (53.710834ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-057626 | jenkins | v1.31.0 | 17 Jul 23 21:37 UTC |          |
	|         | -p download-only-057626        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 21:37:53
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 21:37:53.605728   13809 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:37:53.605889   13809 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:37:53.605900   13809 out.go:309] Setting ErrFile to fd 2...
	I0717 21:37:53.605907   13809 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:37:53.606122   13809 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-6542/.minikube/bin
	W0717 21:37:53.606282   13809 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16899-6542/.minikube/config/config.json: open /home/jenkins/minikube-integration/16899-6542/.minikube/config/config.json: no such file or directory
	I0717 21:37:53.606849   13809 out.go:303] Setting JSON to true
	I0717 21:37:53.607646   13809 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1226,"bootTime":1689628648,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 21:37:53.607706   13809 start.go:138] virtualization: kvm guest
	I0717 21:37:53.610377   13809 out.go:97] [download-only-057626] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 21:37:53.612514   13809 out.go:169] MINIKUBE_LOCATION=16899
	W0717 21:37:53.610470   13809 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/16899-6542/.minikube/cache/preloaded-tarball: no such file or directory
	I0717 21:37:53.610501   13809 notify.go:220] Checking for updates...
	I0717 21:37:53.615448   13809 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 21:37:53.616986   13809 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16899-6542/kubeconfig
	I0717 21:37:53.618348   13809 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-6542/.minikube
	I0717 21:37:53.619609   13809 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0717 21:37:53.622193   13809 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 21:37:53.622411   13809 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 21:37:53.725516   13809 out.go:97] Using the kvm2 driver based on user configuration
	I0717 21:37:53.725537   13809 start.go:298] selected driver: kvm2
	I0717 21:37:53.725542   13809 start.go:880] validating driver "kvm2" against <nil>
	I0717 21:37:53.725829   13809 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:37:53.725938   13809 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16899-6542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 21:37:53.740250   13809 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.31.0
	I0717 21:37:53.740292   13809 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 21:37:53.740688   13809 start_flags.go:382] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0717 21:37:53.740830   13809 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 21:37:53.740855   13809 cni.go:84] Creating CNI manager for ""
	I0717 21:37:53.740864   13809 cni.go:152] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0717 21:37:53.740870   13809 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 21:37:53.740876   13809 start_flags.go:319] config:
	{Name:download-only-057626 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-057626 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:37:53.741042   13809 iso.go:125] acquiring lock: {Name:mk2c3e3c0e4d92ba8dafc265e87aade8da278690 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:37:53.743136   13809 out.go:97] Downloading VM boot image ...
	I0717 21:37:53.743170   13809 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/16899-6542/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso
	I0717 21:37:55.880405   13809 out.go:97] Starting control plane node download-only-057626 in cluster download-only-057626
	I0717 21:37:55.880429   13809 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0717 21:37:55.901992   13809 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0717 21:37:55.902016   13809 cache.go:57] Caching tarball of preloaded images
	I0717 21:37:55.902163   13809 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0717 21:37:55.903932   13809 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0717 21:37:55.903947   13809 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0717 21:37:55.929269   13809 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/16899-6542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-057626"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.05s)

TestDownloadOnly/v1.27.3/json-events (8.27s)

=== RUN   TestDownloadOnly/v1.27.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-057626 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-057626 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (8.270999682s)
--- PASS: TestDownloadOnly/v1.27.3/json-events (8.27s)

TestDownloadOnly/v1.27.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.3/preload-exists
--- PASS: TestDownloadOnly/v1.27.3/preload-exists (0.00s)

TestDownloadOnly/v1.27.3/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.27.3/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-057626
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-057626: exit status 85 (56.838493ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-057626 | jenkins | v1.31.0 | 17 Jul 23 21:37 UTC |          |
	|         | -p download-only-057626        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-057626 | jenkins | v1.31.0 | 17 Jul 23 21:38 UTC |          |
	|         | -p download-only-057626        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 21:38:01
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 21:38:01.943183   13868 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:38:01.943292   13868 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:38:01.943301   13868 out.go:309] Setting ErrFile to fd 2...
	I0717 21:38:01.943305   13868 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:38:01.943497   13868 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-6542/.minikube/bin
	W0717 21:38:01.943615   13868 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16899-6542/.minikube/config/config.json: open /home/jenkins/minikube-integration/16899-6542/.minikube/config/config.json: no such file or directory
	I0717 21:38:01.944001   13868 out.go:303] Setting JSON to true
	I0717 21:38:01.944796   13868 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1234,"bootTime":1689628648,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 21:38:01.944851   13868 start.go:138] virtualization: kvm guest
	I0717 21:38:01.946989   13868 out.go:97] [download-only-057626] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 21:38:01.948489   13868 out.go:169] MINIKUBE_LOCATION=16899
	I0717 21:38:01.947132   13868 notify.go:220] Checking for updates...
	I0717 21:38:01.952017   13868 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 21:38:01.953624   13868 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16899-6542/kubeconfig
	I0717 21:38:01.955088   13868 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-6542/.minikube
	I0717 21:38:01.956470   13868 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0717 21:38:01.958983   13868 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 21:38:01.959381   13868 config.go:182] Loaded profile config "download-only-057626": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W0717 21:38:01.959420   13868 start.go:788] api.Load failed for download-only-057626: filestore "download-only-057626": Docker machine "download-only-057626" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0717 21:38:01.959500   13868 driver.go:373] Setting default libvirt URI to qemu:///system
	W0717 21:38:01.959527   13868 start.go:788] api.Load failed for download-only-057626: filestore "download-only-057626": Docker machine "download-only-057626" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0717 21:38:01.991360   13868 out.go:97] Using the kvm2 driver based on existing profile
	I0717 21:38:01.991380   13868 start.go:298] selected driver: kvm2
	I0717 21:38:01.991384   13868 start.go:880] validating driver "kvm2" against &{Name:download-only-057626 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-on
ly-057626 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:38:01.991721   13868 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:38:01.991811   13868 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16899-6542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 21:38:02.006115   13868 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.31.0
	I0717 21:38:02.006757   13868 cni.go:84] Creating CNI manager for ""
	I0717 21:38:02.006770   13868 cni.go:152] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0717 21:38:02.006777   13868 start_flags.go:319] config:
	{Name:download-only-057626 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:download-only-057626 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServ
erIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticI
P: SSHAuthSock: SSHAgentPID:0}
	I0717 21:38:02.006930   13868 iso.go:125] acquiring lock: {Name:mk2c3e3c0e4d92ba8dafc265e87aade8da278690 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:38:02.008773   13868 out.go:97] Starting control plane node download-only-057626 in cluster download-only-057626
	I0717 21:38:02.008791   13868 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0717 21:38:02.037037   13868 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4
	I0717 21:38:02.037068   13868 cache.go:57] Caching tarball of preloaded images
	I0717 21:38:02.037211   13868 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0717 21:38:02.039260   13868 out.go:97] Downloading Kubernetes v1.27.3 preload ...
	I0717 21:38:02.039281   13868 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4 ...
	I0717 21:38:02.068476   13868 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4?checksum=md5:1f83873e0026e1a370942079b65e1960 -> /home/jenkins/minikube-integration/16899-6542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-057626"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.3/LogsDuration (0.06s)

TestDownloadOnly/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.13s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-057626
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.54s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-260672 --alsologtostderr --binary-mirror http://127.0.0.1:43749 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-260672" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-260672
--- PASS: TestBinaryMirror (0.54s)

TestOffline (121.23s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-134225 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-134225 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (2m0.201856545s)
helpers_test.go:175: Cleaning up "offline-containerd-134225" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-134225
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-134225: (1.028332839s)
--- PASS: TestOffline (121.23s)

TestAddons/Setup (144.98s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-061866 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-061866 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m24.977411695s)
--- PASS: TestAddons/Setup (144.98s)

TestAddons/parallel/Registry (19.96s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 30.65528ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-w6gzd" [2da542cc-0709-40bc-b84b-896cc24ab425] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.01460365s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-stqnf" [a00c0e48-1f03-43df-af79-4e00a7720ba7] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.015068632s
addons_test.go:316: (dbg) Run:  kubectl --context addons-061866 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-061866 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-061866 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.889380966s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-061866 ip
2023/07/17 21:40:55 [DEBUG] GET http://192.168.39.55:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-061866 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.96s)

TestAddons/parallel/Ingress (23.03s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-061866 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-061866 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-061866 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [276da1cf-7b14-4496-9b7c-ce7d04c3e14f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [276da1cf-7b14-4496-9b7c-ce7d04c3e14f] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.021339105s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-061866 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context addons-061866 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-061866 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.55
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-061866 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p addons-061866 addons disable ingress-dns --alsologtostderr -v=1: (1.886493512s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-061866 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-061866 addons disable ingress --alsologtostderr -v=1: (7.907651313s)
--- PASS: TestAddons/parallel/Ingress (23.03s)

TestAddons/parallel/InspektorGadget (10.94s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-7prjj" [e8d9dd5b-003f-4c09-988e-c1904b7f1f88] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.009182407s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-061866
addons_test.go:817: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-061866: (5.92990456s)
--- PASS: TestAddons/parallel/InspektorGadget (10.94s)

TestAddons/parallel/HelmTiller (16.15s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 3.808613ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6847666dc-gtmd8" [a569b132-a101-4fd9-b551-518fa3e6b80e] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.010243821s
addons_test.go:449: (dbg) Run:  kubectl --context addons-061866 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-061866 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (10.345589706s)
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-061866 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (16.15s)

TestAddons/parallel/CSI (63.03s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 36.429998ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-061866 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:540: (dbg) Done: kubectl --context addons-061866 create -f testdata/csi-hostpath-driver/pvc.yaml: (1.230353677s)
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-061866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-061866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-061866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-061866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-061866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-061866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-061866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-061866 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-061866 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [0b81136e-9f97-48a9-a3f0-87ef071cfe6c] Pending
helpers_test.go:344: "task-pv-pod" [0b81136e-9f97-48a9-a3f0-87ef071cfe6c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [0b81136e-9f97-48a9-a3f0-87ef071cfe6c] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.012826609s
addons_test.go:560: (dbg) Run:  kubectl --context addons-061866 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-061866 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-061866 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-061866 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-061866 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-061866 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-061866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-061866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-061866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-061866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-061866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-061866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-061866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-061866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-061866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-061866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-061866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-061866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-061866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-061866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-061866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-061866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-061866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-061866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-061866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-061866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-061866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-061866 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f03a7858-f035-415c-a518-43a7bf3c5a2b] Pending
helpers_test.go:344: "task-pv-pod-restore" [f03a7858-f035-415c-a518-43a7bf3c5a2b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f03a7858-f035-415c-a518-43a7bf3c5a2b] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.022330707s
addons_test.go:602: (dbg) Run:  kubectl --context addons-061866 delete pod task-pv-pod-restore
addons_test.go:602: (dbg) Done: kubectl --context addons-061866 delete pod task-pv-pod-restore: (1.333505706s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-061866 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-061866 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-061866 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-061866 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.763561979s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-061866 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (63.03s)

TestAddons/parallel/Headlamp (14.49s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-061866 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-061866 --alsologtostderr -v=1: (1.48055129s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-66f6498c69-dqqgl" [f2f2ce14-3011-414b-9166-5b9b5ec17cc4] Pending
helpers_test.go:344: "headlamp-66f6498c69-dqqgl" [f2f2ce14-3011-414b-9166-5b9b5ec17cc4] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-66f6498c69-dqqgl" [f2f2ce14-3011-414b-9166-5b9b5ec17cc4] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.012155156s
--- PASS: TestAddons/parallel/Headlamp (14.49s)

TestAddons/parallel/CloudSpanner (5.74s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-88647b4cb-d4pmr" [f2676f6c-7857-4842-a461-0a9cd5634a94] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.014525054s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-061866
--- PASS: TestAddons/parallel/CloudSpanner (5.74s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-061866 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-061866 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/StoppedEnableDisable (92.06s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-061866
addons_test.go:148: (dbg) Done: out/minikube-linux-amd64 stop -p addons-061866: (1m31.79786324s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-061866
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-061866
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-061866
--- PASS: TestAddons/StoppedEnableDisable (92.06s)

TestCertOptions (73.27s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-985453 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
E0717 22:18:39.165288   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-985453 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m11.732775421s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-985453 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-985453 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-985453 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-985453" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-985453
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-985453: (1.033719714s)
--- PASS: TestCertOptions (73.27s)

TestCertExpiration (265.07s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-608059 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
E0717 22:16:37.348706   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/functional-982689/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-608059 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (59.887710696s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-608059 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
E0717 22:20:36.116814   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-608059 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (23.462489897s)
helpers_test.go:175: Cleaning up "cert-expiration-608059" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-608059
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-608059: (1.723405416s)
--- PASS: TestCertExpiration (265.07s)

TestForceSystemdFlag (53.29s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-282525 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-282525 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (51.165996691s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-282525 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-282525" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-282525
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-282525: (1.90557481s)
--- PASS: TestForceSystemdFlag (53.29s)

TestForceSystemdEnv (51.42s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-272854 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-272854 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (50.106028622s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-272854 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-272854" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-272854
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-272854: (1.119889111s)
--- PASS: TestForceSystemdEnv (51.42s)

TestKVMDriverInstallOrUpdate (2.89s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (2.89s)

TestErrorSpam/setup (53.44s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-350832 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-350832 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-350832 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-350832 --driver=kvm2  --container-runtime=containerd: (53.441843161s)
--- PASS: TestErrorSpam/setup (53.44s)

TestErrorSpam/start (0.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350832 --log_dir /tmp/nospam-350832 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350832 --log_dir /tmp/nospam-350832 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350832 --log_dir /tmp/nospam-350832 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.75s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350832 --log_dir /tmp/nospam-350832 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350832 --log_dir /tmp/nospam-350832 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350832 --log_dir /tmp/nospam-350832 status
--- PASS: TestErrorSpam/status (0.75s)

TestErrorSpam/pause (1.46s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350832 --log_dir /tmp/nospam-350832 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350832 --log_dir /tmp/nospam-350832 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350832 --log_dir /tmp/nospam-350832 pause
--- PASS: TestErrorSpam/pause (1.46s)

TestErrorSpam/unpause (1.61s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350832 --log_dir /tmp/nospam-350832 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350832 --log_dir /tmp/nospam-350832 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350832 --log_dir /tmp/nospam-350832 unpause
--- PASS: TestErrorSpam/unpause (1.61s)

TestErrorSpam/stop (3.21s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350832 --log_dir /tmp/nospam-350832 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-350832 --log_dir /tmp/nospam-350832 stop: (3.075805539s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350832 --log_dir /tmp/nospam-350832 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350832 --log_dir /tmp/nospam-350832 stop
--- PASS: TestErrorSpam/stop (3.21s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/16899-6542/.minikube/files/etc/test/nested/copy/13797/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (65.76s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-982689 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0717 21:45:36.115915   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.crt: no such file or directory
E0717 21:45:36.121685   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.crt: no such file or directory
E0717 21:45:36.131994   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.crt: no such file or directory
E0717 21:45:36.152301   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-982689 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m5.755711705s)
--- PASS: TestFunctional/serial/StartWithProxy (65.76s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.24s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-982689 --alsologtostderr -v=8
E0717 21:45:36.192807   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.crt: no such file or directory
E0717 21:45:36.273324   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.crt: no such file or directory
E0717 21:45:36.434392   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.crt: no such file or directory
E0717 21:45:36.754903   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.crt: no such file or directory
E0717 21:45:37.396061   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.crt: no such file or directory
E0717 21:45:38.676650   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.crt: no such file or directory
E0717 21:45:41.237567   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-982689 --alsologtostderr -v=8: (6.238506692s)
functional_test.go:659: soft start took 6.239188529s for "functional-982689" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.24s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-982689 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.46s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-982689 cache add registry.k8s.io/pause:3.1: (1.139887735s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-982689 cache add registry.k8s.io/pause:3.3: (1.176145333s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-982689 cache add registry.k8s.io/pause:latest: (1.147384474s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.46s)

TestFunctional/serial/CacheCmd/cache/add_local (2.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-982689 /tmp/TestFunctionalserialCacheCmdcacheadd_local2444110728/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 cache add minikube-local-cache-test:functional-982689
E0717 21:45:46.358009   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.crt: no such file or directory
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-982689 cache add minikube-local-cache-test:functional-982689: (1.727337506s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 cache delete minikube-local-cache-test:functional-982689
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-982689
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.04s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-982689 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (214.680688ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-982689 cache reload: (1.38345486s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.05s)

TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 kubectl -- --context functional-982689 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-982689 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

TestFunctional/serial/ExtraConfig (39.49s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-982689 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0717 21:45:56.598438   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.crt: no such file or directory
E0717 21:46:17.079461   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-982689 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.493064951s)
functional_test.go:757: restart took 39.493179631s for "functional-982689" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.49s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-982689 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.35s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-982689 logs: (1.35184467s)
--- PASS: TestFunctional/serial/LogsCmd (1.35s)

TestFunctional/serial/LogsFileCmd (1.33s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 logs --file /tmp/TestFunctionalserialLogsFileCmd2313524299/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-982689 logs --file /tmp/TestFunctionalserialLogsFileCmd2313524299/001/logs.txt: (1.330905237s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.33s)

TestFunctional/serial/InvalidService (4.21s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-982689 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-982689
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-982689: exit status 115 (286.328311ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.82:31873 |
	|-----------|-------------|-------------|----------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-982689 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.21s)

TestFunctional/parallel/ConfigCmd (0.29s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-982689 config get cpus: exit status 14 (54.713713ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-982689 config get cpus: exit status 14 (45.664879ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.29s)

TestFunctional/parallel/DashboardCmd (19.61s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-982689 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-982689 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 20500: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (19.61s)

TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-982689 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-982689 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (139.313176ms)

-- stdout --
	* [functional-982689] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-6542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-6542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0717 21:46:59.770804   20218 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:46:59.770931   20218 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:46:59.770939   20218 out.go:309] Setting ErrFile to fd 2...
	I0717 21:46:59.770945   20218 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:46:59.771201   20218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-6542/.minikube/bin
	I0717 21:46:59.771719   20218 out.go:303] Setting JSON to false
	I0717 21:46:59.772580   20218 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1772,"bootTime":1689628648,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 21:46:59.772641   20218 start.go:138] virtualization: kvm guest
	I0717 21:46:59.774971   20218 out.go:177] * [functional-982689] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 21:46:59.777051   20218 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 21:46:59.777017   20218 notify.go:220] Checking for updates...
	I0717 21:46:59.778656   20218 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 21:46:59.780132   20218 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-6542/kubeconfig
	I0717 21:46:59.781494   20218 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-6542/.minikube
	I0717 21:46:59.782902   20218 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 21:46:59.784572   20218 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 21:46:59.787332   20218 config.go:182] Loaded profile config "functional-982689": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0717 21:46:59.788035   20218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:46:59.788097   20218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:46:59.802347   20218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33067
	I0717 21:46:59.802771   20218 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:46:59.803344   20218 main.go:141] libmachine: Using API Version  1
	I0717 21:46:59.803369   20218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:46:59.803749   20218 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:46:59.803932   20218 main.go:141] libmachine: (functional-982689) Calling .DriverName
	I0717 21:46:59.804160   20218 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 21:46:59.804503   20218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:46:59.804540   20218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:46:59.819073   20218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46813
	I0717 21:46:59.819452   20218 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:46:59.819910   20218 main.go:141] libmachine: Using API Version  1
	I0717 21:46:59.819936   20218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:46:59.820262   20218 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:46:59.820447   20218 main.go:141] libmachine: (functional-982689) Calling .DriverName
	I0717 21:46:59.860919   20218 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 21:46:59.862318   20218 start.go:298] selected driver: kvm2
	I0717 21:46:59.862334   20218 start.go:880] validating driver "kvm2" against &{Name:functional-982689 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-982
689 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.82 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountStrin
g:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:46:59.862440   20218 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 21:46:59.867341   20218 out.go:177] 
	W0717 21:46:59.868937   20218 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0717 21:46:59.870410   20218 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-982689 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.28s)
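Editor's note: the expected exit status 23 above comes from minikube's pre-flight memory validation, which rejects the requested 250MB against the 1800MB usable minimum before any VM work happens. The sketch below is a minimal, hypothetical model of that check — only the error code name and the 1800MB floor are taken from the log; the function and its wording are illustrative, not minikube's actual implementation.

```python
# Hypothetical sketch of a pre-flight memory check like the one behind
# RSRC_INSUFFICIENT_REQ_MEMORY. The 1800MB floor and the error code name
# come from the log above; everything else is illustrative.

MINIMUM_MEMORY_MB = 1800  # usable minimum reported by minikube in this run

def validate_requested_memory(requested_mb):
    """Return an error string if the request is below the minimum, else None."""
    if requested_mb < MINIMUM_MEMORY_MB:
        return ("RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation "
                f"{requested_mb}MiB is less than the usable minimum of "
                f"{MINIMUM_MEMORY_MB}MB")
    return None

print(validate_requested_memory(250))   # reproduces the rejection in the log
print(validate_requested_memory(4000))  # the profile's configured 4000MB passes
```

Because the check fires before any driver work, `--dry-run` still exercises it, which is exactly what this test relies on.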

TestFunctional/parallel/InternationalLanguage (0.13s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-982689 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-982689 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (133.326671ms)

-- stdout --
	* [functional-982689] minikube v1.31.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-6542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-6542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0717 21:47:00.052671   20273 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:47:00.052835   20273 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:47:00.052848   20273 out.go:309] Setting ErrFile to fd 2...
	I0717 21:47:00.052855   20273 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:47:00.053120   20273 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-6542/.minikube/bin
	I0717 21:47:00.053657   20273 out.go:303] Setting JSON to false
	I0717 21:47:00.054494   20273 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1772,"bootTime":1689628648,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 21:47:00.054549   20273 start.go:138] virtualization: kvm guest
	I0717 21:47:00.056881   20273 out.go:177] * [functional-982689] minikube v1.31.0 sur Ubuntu 20.04 (kvm/amd64)
	I0717 21:47:00.058562   20273 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 21:47:00.060174   20273 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 21:47:00.058591   20273 notify.go:220] Checking for updates...
	I0717 21:47:00.061710   20273 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-6542/kubeconfig
	I0717 21:47:00.063178   20273 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-6542/.minikube
	I0717 21:47:00.064630   20273 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 21:47:00.066087   20273 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 21:47:00.069155   20273 config.go:182] Loaded profile config "functional-982689": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0717 21:47:00.069570   20273 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:47:00.069614   20273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:47:00.084704   20273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33495
	I0717 21:47:00.085174   20273 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:47:00.085826   20273 main.go:141] libmachine: Using API Version  1
	I0717 21:47:00.085853   20273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:47:00.086195   20273 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:47:00.086433   20273 main.go:141] libmachine: (functional-982689) Calling .DriverName
	I0717 21:47:00.086689   20273 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 21:47:00.086958   20273 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:47:00.086991   20273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:47:00.101283   20273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43031
	I0717 21:47:00.101973   20273 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:47:00.103475   20273 main.go:141] libmachine: Using API Version  1
	I0717 21:47:00.103502   20273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:47:00.103841   20273 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:47:00.104021   20273 main.go:141] libmachine: (functional-982689) Calling .DriverName
	I0717 21:47:00.137709   20273 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0717 21:47:00.139137   20273 start.go:298] selected driver: kvm2
	I0717 21:47:00.139148   20273 start.go:880] validating driver "kvm2" against &{Name:functional-982689 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-982
689 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.82 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountStrin
g:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:47:00.139272   20273 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 21:47:00.141700   20273 out.go:177] 
	W0717 21:47:00.143121   20273 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0717 21:47:00.144601   20273 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

TestFunctional/parallel/StatusCmd (1.3s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.30s)
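Editor's note: the test above exercises three output modes of `minikube status` — default, a Go-template format string (`-f`), and JSON (`-o json`). As a rough sketch of how a consumer could turn the JSON form into the same comma-separated summary the template produces, assuming a payload with the `Host`/`Kubelet`/`APIServer`/`Kubeconfig` fields named in the template (the sample values below are illustrative, not captured output):

```python
import json

# Parse a minikube-status-style JSON payload into a comma-separated summary
# resembling the Go template used in the test. The field names mirror the
# template keys in the log; the sample payload itself is illustrative.

sample = json.loads(
    '{"Name":"functional-982689","Host":"Running","Kubelet":"Running",'
    '"APIServer":"Running","Kubeconfig":"Configured"}'
)

def summarize(status):
    """Render the four component states as key:value pairs."""
    keys = ["Host", "Kubelet", "APIServer", "Kubeconfig"]
    return ",".join(f"{k.lower()}:{status[k]}" for k in keys)

print(summarize(sample))
# → host:Running,kubelet:Running,apiserver:Running,kubeconfig:Configured
```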

TestFunctional/parallel/ServiceCmdConnect (7.84s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-982689 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-982689 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6fb669fc84-b9rsn" [b8ffdcff-6e18-4093-8e25-f075cee3712d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-6fb669fc84-b9rsn" [b8ffdcff-6e18-4093-8e25-f075cee3712d] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.017560337s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.82:31203
functional_test.go:1674: http://192.168.39.82:31203: success! body:

Hostname: hello-node-connect-6fb669fc84-b9rsn

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.82:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.82:31203
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.84s)
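Editor's note: the echoserver body above is a loose `key=value` listing (client address, method, request URI, headers). A small, hypothetical parser like the one below is enough to turn it into a dict for assertions such as the endpoint check this test performs; the sample text is trimmed from the response in the log.

```python
# Parse echoserver-style "key=value" response lines into a dict.
# The body below is trimmed from the ServiceCmdConnect response above.

body = """\
Hostname: hello-node-connect-6fb669fc84-b9rsn
client_address=10.244.0.1
method=GET
real path=/
request_version=1.1
request_uri=http://192.168.39.82:8080/
"""

def parse_kv(text):
    """Collect key=value lines, splitting on the first '=' only."""
    pairs = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            pairs[key.strip()] = value.strip()
    return pairs

info = parse_kv(body)
print(info["method"], info["client_address"])  # → GET 10.244.0.1
```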

TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (38.87s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [a8f35fe6-90aa-4e75-a7bc-e1121f561f7d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.017090714s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-982689 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-982689 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-982689 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-982689 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-982689 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ee65b5bb-891a-46f0-ada9-d295044af53f] Pending
helpers_test.go:344: "sp-pod" [ee65b5bb-891a-46f0-ada9-d295044af53f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ee65b5bb-891a-46f0-ada9-d295044af53f] Running
E0717 21:46:58.039654   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.00965402s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-982689 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-982689 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-982689 delete -f testdata/storage-provisioner/pod.yaml: (2.076590155s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-982689 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [22dd9635-4282-4800-801b-447622d083f8] Pending
helpers_test.go:344: "sp-pod" [22dd9635-4282-4800-801b-447622d083f8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [22dd9635-4282-4800-801b-447622d083f8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.014449361s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-982689 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (38.87s)

TestFunctional/parallel/SSHCmd (0.46s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.46s)

TestFunctional/parallel/CpCmd (0.95s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 ssh -n functional-982689 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 cp functional-982689:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3585651994/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 ssh -n functional-982689 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.95s)

TestFunctional/parallel/MySQL (32.2s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-982689 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-7db894d786-w298g" [f57041d0-e199-44dc-8ec1-d76042ab050a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-7db894d786-w298g" [f57041d0-e199-44dc-8ec1-d76042ab050a] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.009270942s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-982689 exec mysql-7db894d786-w298g -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-982689 exec mysql-7db894d786-w298g -- mysql -ppassword -e "show databases;": exit status 1 (218.394213ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-982689 exec mysql-7db894d786-w298g -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-982689 exec mysql-7db894d786-w298g -- mysql -ppassword -e "show databases;": exit status 1 (177.584351ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-982689 exec mysql-7db894d786-w298g -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-982689 exec mysql-7db894d786-w298g -- mysql -ppassword -e "show databases;": exit status 1 (322.439365ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-982689 exec mysql-7db894d786-w298g -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-982689 exec mysql-7db894d786-w298g -- mysql -ppassword -e "show databases;": exit status 1 (233.963419ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-982689 exec mysql-7db894d786-w298g -- mysql -ppassword -e "show databases;"
2023/07/17 21:47:27 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MySQL (32.20s)

TestFunctional/parallel/FileSync (0.21s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/13797/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 ssh "sudo cat /etc/test/nested/copy/13797/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

TestFunctional/parallel/CertSync (1.42s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/13797.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 ssh "sudo cat /etc/ssl/certs/13797.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/13797.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 ssh "sudo cat /usr/share/ca-certificates/13797.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/137972.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 ssh "sudo cat /etc/ssl/certs/137972.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/137972.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 ssh "sudo cat /usr/share/ca-certificates/137972.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.42s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-982689 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-982689 ssh "sudo systemctl is-active docker": exit status 1 (217.109991ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-982689 ssh "sudo systemctl is-active crio": exit status 1 (260.203417ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-982689 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-982689 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-775766b4cc-7rfrh" [093167c4-571e-4bfb-9da6-9dad30146de3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-775766b4cc-7rfrh" [093167c4-571e-4bfb-9da6-9dad30146de3] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.031006781s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.25s)

TestFunctional/parallel/Version/short (0.15s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 version --short
--- PASS: TestFunctional/parallel/Version/short (0.15s)

TestFunctional/parallel/Version/components (1.06s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-amd64 -p functional-982689 version -o=json --components: (1.064632703s)
--- PASS: TestFunctional/parallel/Version/components (1.06s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-982689 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.3
registry.k8s.io/kube-proxy:v1.27.3
registry.k8s.io/kube-controller-manager:v1.27.3
registry.k8s.io/kube-apiserver:v1.27.3
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-982689
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-982689
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-982689 image ls --format short --alsologtostderr:
I0717 21:47:17.808482   21121 out.go:296] Setting OutFile to fd 1 ...
I0717 21:47:17.808581   21121 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:47:17.808593   21121 out.go:309] Setting ErrFile to fd 2...
I0717 21:47:17.808599   21121 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:47:17.808811   21121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-6542/.minikube/bin
I0717 21:47:17.809405   21121 config.go:182] Loaded profile config "functional-982689": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0717 21:47:17.809517   21121 config.go:182] Loaded profile config "functional-982689": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0717 21:47:17.809882   21121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0717 21:47:17.809934   21121 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 21:47:17.825029   21121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33855
I0717 21:47:17.825425   21121 main.go:141] libmachine: () Calling .GetVersion
I0717 21:47:17.826035   21121 main.go:141] libmachine: Using API Version  1
I0717 21:47:17.826063   21121 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 21:47:17.826459   21121 main.go:141] libmachine: () Calling .GetMachineName
I0717 21:47:17.826663   21121 main.go:141] libmachine: (functional-982689) Calling .GetState
I0717 21:47:17.828503   21121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0717 21:47:17.828539   21121 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 21:47:17.843510   21121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36797
I0717 21:47:17.843925   21121 main.go:141] libmachine: () Calling .GetVersion
I0717 21:47:17.844437   21121 main.go:141] libmachine: Using API Version  1
I0717 21:47:17.844466   21121 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 21:47:17.844847   21121 main.go:141] libmachine: () Calling .GetMachineName
I0717 21:47:17.845077   21121 main.go:141] libmachine: (functional-982689) Calling .DriverName
I0717 21:47:17.845296   21121 ssh_runner.go:195] Run: systemctl --version
I0717 21:47:17.845318   21121 main.go:141] libmachine: (functional-982689) Calling .GetSSHHostname
I0717 21:47:17.848270   21121 main.go:141] libmachine: (functional-982689) DBG | domain functional-982689 has defined MAC address 52:54:00:58:b9:9c in network mk-functional-982689
I0717 21:47:17.848687   21121 main.go:141] libmachine: (functional-982689) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:b9:9c", ip: ""} in network mk-functional-982689: {Iface:virbr1 ExpiryTime:2023-07-17 22:44:46 +0000 UTC Type:0 Mac:52:54:00:58:b9:9c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:functional-982689 Clientid:01:52:54:00:58:b9:9c}
I0717 21:47:17.848716   21121 main.go:141] libmachine: (functional-982689) DBG | domain functional-982689 has defined IP address 192.168.39.82 and MAC address 52:54:00:58:b9:9c in network mk-functional-982689
I0717 21:47:17.848860   21121 main.go:141] libmachine: (functional-982689) Calling .GetSSHPort
I0717 21:47:17.849045   21121 main.go:141] libmachine: (functional-982689) Calling .GetSSHKeyPath
I0717 21:47:17.849210   21121 main.go:141] libmachine: (functional-982689) Calling .GetSSHUsername
I0717 21:47:17.849362   21121 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-6542/.minikube/machines/functional-982689/id_rsa Username:docker}
I0717 21:47:17.984457   21121 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 21:47:18.099195   21121 main.go:141] libmachine: Making call to close driver server
I0717 21:47:18.099211   21121 main.go:141] libmachine: (functional-982689) Calling .Close
I0717 21:47:18.099477   21121 main.go:141] libmachine: Successfully made call to close driver server
I0717 21:47:18.099500   21121 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 21:47:18.099509   21121 main.go:141] libmachine: Making call to close driver server
I0717 21:47:18.099517   21121 main.go:141] libmachine: (functional-982689) Calling .Close
I0717 21:47:18.099750   21121 main.go:141] libmachine: Successfully made call to close driver server
I0717 21:47:18.099763   21121 main.go:141] libmachine: (functional-982689) DBG | Closing plugin on server side
I0717 21:47:18.099768   21121 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-982689 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager     | v1.27.3            | sha256:7cffc0 | 31MB   |
| registry.k8s.io/kube-proxy                  | v1.27.3            | sha256:578054 | 23.9MB |
| docker.io/library/minikube-local-cache-test | functional-982689  | sha256:f0e14c | 1.01kB |
| docker.io/library/mysql                     | 5.7                | sha256:2be84d | 169MB  |
| docker.io/library/nginx                     | latest             | sha256:021283 | 70.6MB |
| gcr.io/google-containers/addon-resizer      | functional-982689  | sha256:ffd4cf | 10.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:ead0a4 | 16.2MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/etcd                        | 3.5.7-0            | sha256:86b6af | 102MB  |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| docker.io/library/nginx                     | alpine             | sha256:493752 | 17MB   |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-scheduler              | v1.27.3            | sha256:41697c | 18.2MB |
| docker.io/kindest/kindnetd                  | v20230511-dc714da8 | sha256:b0b1fa | 27.7MB |
| registry.k8s.io/kube-apiserver              | v1.27.3            | sha256:08a0c9 | 33.4MB |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-982689 image ls --format table --alsologtostderr:
I0717 21:47:18.653145   21255 out.go:296] Setting OutFile to fd 1 ...
I0717 21:47:18.653274   21255 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:47:18.653307   21255 out.go:309] Setting ErrFile to fd 2...
I0717 21:47:18.653311   21255 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:47:18.653510   21255 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-6542/.minikube/bin
I0717 21:47:18.654064   21255 config.go:182] Loaded profile config "functional-982689": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0717 21:47:18.654155   21255 config.go:182] Loaded profile config "functional-982689": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0717 21:47:18.654462   21255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0717 21:47:18.654506   21255 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 21:47:18.669733   21255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33023
I0717 21:47:18.670183   21255 main.go:141] libmachine: () Calling .GetVersion
I0717 21:47:18.670812   21255 main.go:141] libmachine: Using API Version  1
I0717 21:47:18.670830   21255 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 21:47:18.671227   21255 main.go:141] libmachine: () Calling .GetMachineName
I0717 21:47:18.671475   21255 main.go:141] libmachine: (functional-982689) Calling .GetState
I0717 21:47:18.673356   21255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0717 21:47:18.673399   21255 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 21:47:18.687938   21255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39387
I0717 21:47:18.688383   21255 main.go:141] libmachine: () Calling .GetVersion
I0717 21:47:18.688895   21255 main.go:141] libmachine: Using API Version  1
I0717 21:47:18.688921   21255 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 21:47:18.689369   21255 main.go:141] libmachine: () Calling .GetMachineName
I0717 21:47:18.689586   21255 main.go:141] libmachine: (functional-982689) Calling .DriverName
I0717 21:47:18.689856   21255 ssh_runner.go:195] Run: systemctl --version
I0717 21:47:18.689886   21255 main.go:141] libmachine: (functional-982689) Calling .GetSSHHostname
I0717 21:47:18.692672   21255 main.go:141] libmachine: (functional-982689) DBG | domain functional-982689 has defined MAC address 52:54:00:58:b9:9c in network mk-functional-982689
I0717 21:47:18.693210   21255 main.go:141] libmachine: (functional-982689) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:b9:9c", ip: ""} in network mk-functional-982689: {Iface:virbr1 ExpiryTime:2023-07-17 22:44:46 +0000 UTC Type:0 Mac:52:54:00:58:b9:9c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:functional-982689 Clientid:01:52:54:00:58:b9:9c}
I0717 21:47:18.693277   21255 main.go:141] libmachine: (functional-982689) DBG | domain functional-982689 has defined IP address 192.168.39.82 and MAC address 52:54:00:58:b9:9c in network mk-functional-982689
I0717 21:47:18.693402   21255 main.go:141] libmachine: (functional-982689) Calling .GetSSHPort
I0717 21:47:18.693566   21255 main.go:141] libmachine: (functional-982689) Calling .GetSSHKeyPath
I0717 21:47:18.693719   21255 main.go:141] libmachine: (functional-982689) Calling .GetSSHUsername
I0717 21:47:18.693843   21255 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-6542/.minikube/machines/functional-982689/id_rsa Username:docker}
I0717 21:47:18.821183   21255 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 21:47:18.879495   21255 main.go:141] libmachine: Making call to close driver server
I0717 21:47:18.879519   21255 main.go:141] libmachine: (functional-982689) Calling .Close
I0717 21:47:18.879807   21255 main.go:141] libmachine: Successfully made call to close driver server
I0717 21:47:18.879837   21255 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 21:47:18.879855   21255 main.go:141] libmachine: Making call to close driver server
I0717 21:47:18.879868   21255 main.go:141] libmachine: (functional-982689) Calling .Close
I0717 21:47:18.880119   21255 main.go:141] libmachine: Successfully made call to close driver server
I0717 21:47:18.880139   21255 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 21:47:18.880143   21255 main.go:141] libmachine: (functional-982689) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-982689 image ls --format json --alsologtostderr:
[{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-982689"],"size":"10823156"},{"id":"sha256:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a","repoDigests":["registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.3"],"size":"18231737"},{"id":"sha256:4937520ae206c8969734d9a659fc1e6594d9b22b9340bf0796defbea0c92dd02","repoDigests":["docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6"],"repoTags":["docker.io/library/nginx:alpine"],"size":"16978757"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:2be84dd575ee2ecdb186dc43a9cd951890a764
d2cefbd31a72cdf4410c43a2d0","repoDigests":["docker.io/library/mysql@sha256:bd873931ef20f30a5a9bf71498ce4e02c88cf48b2e8b782c337076d814deebde"],"repoTags":["docker.io/library/mysql:5.7"],"size":"169282307"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"}
,{"id":"sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","repoDigests":["registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83"],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"101639218"},{"id":"sha256:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","repoDigests":["docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"27731571"},{"id":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"16190758"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f51791
53fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:021283c8eb95be02b23db0de7f609d603553c6714785e7a673c6594a624ffbda","repoDigests":["docker.io/library/nginx@sha256:08bc36ad52474e528cc1ea3426b5e3f4bad8a130318e3140d6cfe29c8892c7ef"],"repoTags":["docker.io/library/nginx:latest"],"size":"70601656"},{"id":"sha256:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.3"],"size":"33364386"},{"id":"sha256:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.3"],"size":"30973055"},{"id":"sha256:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c","repoDigests":["regist
ry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"],"repoTags":["registry.k8s.io/kube-proxy:v1.27.3"],"size":"23897400"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:f0e14c4ab8d39198a59c79bf040e02bdfaeed349130ed5d6471bcf67d21a1429","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-982689"],"size":"1006"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-982689 image ls --format json --alsologtostderr:
I0717 21:47:18.396595   21208 out.go:296] Setting OutFile to fd 1 ...
I0717 21:47:18.396723   21208 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:47:18.396732   21208 out.go:309] Setting ErrFile to fd 2...
I0717 21:47:18.396737   21208 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:47:18.396946   21208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-6542/.minikube/bin
I0717 21:47:18.397540   21208 config.go:182] Loaded profile config "functional-982689": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0717 21:47:18.397643   21208 config.go:182] Loaded profile config "functional-982689": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0717 21:47:18.398034   21208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0717 21:47:18.398095   21208 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 21:47:18.415758   21208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34411
I0717 21:47:18.416261   21208 main.go:141] libmachine: () Calling .GetVersion
I0717 21:47:18.416874   21208 main.go:141] libmachine: Using API Version  1
I0717 21:47:18.416900   21208 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 21:47:18.417282   21208 main.go:141] libmachine: () Calling .GetMachineName
I0717 21:47:18.417457   21208 main.go:141] libmachine: (functional-982689) Calling .GetState
I0717 21:47:18.419517   21208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0717 21:47:18.419559   21208 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 21:47:18.435512   21208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38303
I0717 21:47:18.435871   21208 main.go:141] libmachine: () Calling .GetVersion
I0717 21:47:18.436350   21208 main.go:141] libmachine: Using API Version  1
I0717 21:47:18.436370   21208 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 21:47:18.436744   21208 main.go:141] libmachine: () Calling .GetMachineName
I0717 21:47:18.436909   21208 main.go:141] libmachine: (functional-982689) Calling .DriverName
I0717 21:47:18.437105   21208 ssh_runner.go:195] Run: systemctl --version
I0717 21:47:18.437139   21208 main.go:141] libmachine: (functional-982689) Calling .GetSSHHostname
I0717 21:47:18.439429   21208 main.go:141] libmachine: (functional-982689) DBG | domain functional-982689 has defined MAC address 52:54:00:58:b9:9c in network mk-functional-982689
I0717 21:47:18.439742   21208 main.go:141] libmachine: (functional-982689) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:b9:9c", ip: ""} in network mk-functional-982689: {Iface:virbr1 ExpiryTime:2023-07-17 22:44:46 +0000 UTC Type:0 Mac:52:54:00:58:b9:9c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:functional-982689 Clientid:01:52:54:00:58:b9:9c}
I0717 21:47:18.439771   21208 main.go:141] libmachine: (functional-982689) DBG | domain functional-982689 has defined IP address 192.168.39.82 and MAC address 52:54:00:58:b9:9c in network mk-functional-982689
I0717 21:47:18.439959   21208 main.go:141] libmachine: (functional-982689) Calling .GetSSHPort
I0717 21:47:18.440129   21208 main.go:141] libmachine: (functional-982689) Calling .GetSSHKeyPath
I0717 21:47:18.440261   21208 main.go:141] libmachine: (functional-982689) Calling .GetSSHUsername
I0717 21:47:18.440382   21208 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-6542/.minikube/machines/functional-982689/id_rsa Username:docker}
I0717 21:47:18.538337   21208 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 21:47:18.605420   21208 main.go:141] libmachine: Making call to close driver server
I0717 21:47:18.605437   21208 main.go:141] libmachine: (functional-982689) Calling .Close
I0717 21:47:18.605715   21208 main.go:141] libmachine: Successfully made call to close driver server
I0717 21:47:18.605747   21208 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 21:47:18.605760   21208 main.go:141] libmachine: Making call to close driver server
I0717 21:47:18.605754   21208 main.go:141] libmachine: (functional-982689) DBG | Closing plugin on server side
I0717 21:47:18.605775   21208 main.go:141] libmachine: (functional-982689) Calling .Close
I0717 21:47:18.605971   21208 main.go:141] libmachine: Successfully made call to close driver server
I0717 21:47:18.605983   21208 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-982689 image ls --format yaml --alsologtostderr:
- id: sha256:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.3
size: "18231737"
- id: sha256:2be84dd575ee2ecdb186dc43a9cd951890a764d2cefbd31a72cdf4410c43a2d0
repoDigests:
- docker.io/library/mysql@sha256:bd873931ef20f30a5a9bf71498ce4e02c88cf48b2e8b782c337076d814deebde
repoTags:
- docker.io/library/mysql:5.7
size: "169282307"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-982689
size: "10823156"
- id: sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "16190758"
- id: sha256:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.3
size: "33364386"
- id: sha256:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.3
size: "30973055"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681
repoDigests:
- registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "101639218"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:021283c8eb95be02b23db0de7f609d603553c6714785e7a673c6594a624ffbda
repoDigests:
- docker.io/library/nginx@sha256:08bc36ad52474e528cc1ea3426b5e3f4bad8a130318e3140d6cfe29c8892c7ef
repoTags:
- docker.io/library/nginx:latest
size: "70601656"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:4937520ae206c8969734d9a659fc1e6594d9b22b9340bf0796defbea0c92dd02
repoDigests:
- docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6
repoTags:
- docker.io/library/nginx:alpine
size: "16978757"
- id: sha256:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c
repoDigests:
- registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699
repoTags:
- registry.k8s.io/kube-proxy:v1.27.3
size: "23897400"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da
repoDigests:
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "27731571"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:f0e14c4ab8d39198a59c79bf040e02bdfaeed349130ed5d6471bcf67d21a1429
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-982689
size: "1006"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-982689 image ls --format yaml --alsologtostderr:
I0717 21:47:18.155496   21155 out.go:296] Setting OutFile to fd 1 ...
I0717 21:47:18.155620   21155 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:47:18.155633   21155 out.go:309] Setting ErrFile to fd 2...
I0717 21:47:18.155639   21155 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:47:18.155931   21155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-6542/.minikube/bin
I0717 21:47:18.156767   21155 config.go:182] Loaded profile config "functional-982689": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0717 21:47:18.156905   21155 config.go:182] Loaded profile config "functional-982689": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0717 21:47:18.157435   21155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0717 21:47:18.157506   21155 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 21:47:18.174402   21155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42863
I0717 21:47:18.174969   21155 main.go:141] libmachine: () Calling .GetVersion
I0717 21:47:18.175682   21155 main.go:141] libmachine: Using API Version  1
I0717 21:47:18.175712   21155 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 21:47:18.176115   21155 main.go:141] libmachine: () Calling .GetMachineName
I0717 21:47:18.176305   21155 main.go:141] libmachine: (functional-982689) Calling .GetState
I0717 21:47:18.178220   21155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0717 21:47:18.178275   21155 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 21:47:18.193159   21155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42379
I0717 21:47:18.193560   21155 main.go:141] libmachine: () Calling .GetVersion
I0717 21:47:18.194048   21155 main.go:141] libmachine: Using API Version  1
I0717 21:47:18.194073   21155 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 21:47:18.194396   21155 main.go:141] libmachine: () Calling .GetMachineName
I0717 21:47:18.194593   21155 main.go:141] libmachine: (functional-982689) Calling .DriverName
I0717 21:47:18.194805   21155 ssh_runner.go:195] Run: systemctl --version
I0717 21:47:18.194838   21155 main.go:141] libmachine: (functional-982689) Calling .GetSSHHostname
I0717 21:47:18.197793   21155 main.go:141] libmachine: (functional-982689) DBG | domain functional-982689 has defined MAC address 52:54:00:58:b9:9c in network mk-functional-982689
I0717 21:47:18.198216   21155 main.go:141] libmachine: (functional-982689) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:b9:9c", ip: ""} in network mk-functional-982689: {Iface:virbr1 ExpiryTime:2023-07-17 22:44:46 +0000 UTC Type:0 Mac:52:54:00:58:b9:9c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:functional-982689 Clientid:01:52:54:00:58:b9:9c}
I0717 21:47:18.198260   21155 main.go:141] libmachine: (functional-982689) DBG | domain functional-982689 has defined IP address 192.168.39.82 and MAC address 52:54:00:58:b9:9c in network mk-functional-982689
I0717 21:47:18.198496   21155 main.go:141] libmachine: (functional-982689) Calling .GetSSHPort
I0717 21:47:18.198672   21155 main.go:141] libmachine: (functional-982689) Calling .GetSSHKeyPath
I0717 21:47:18.198815   21155 main.go:141] libmachine: (functional-982689) Calling .GetSSHUsername
I0717 21:47:18.198949   21155 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-6542/.minikube/machines/functional-982689/id_rsa Username:docker}
I0717 21:47:18.310603   21155 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 21:47:18.343783   21155 main.go:141] libmachine: Making call to close driver server
I0717 21:47:18.343810   21155 main.go:141] libmachine: (functional-982689) Calling .Close
I0717 21:47:18.344118   21155 main.go:141] libmachine: Successfully made call to close driver server
I0717 21:47:18.344144   21155 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 21:47:18.344152   21155 main.go:141] libmachine: Making call to close driver server
I0717 21:47:18.344158   21155 main.go:141] libmachine: (functional-982689) Calling .Close
I0717 21:47:18.344118   21155 main.go:141] libmachine: (functional-982689) DBG | Closing plugin on server side
I0717 21:47:18.344446   21155 main.go:141] libmachine: Successfully made call to close driver server
I0717 21:47:18.344471   21155 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)
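The YAML listing above is what minikube renders from `sudo crictl images --output json` (the Run line in the stderr log). A minimal Python sketch of that reduction, assuming CRI-style field names (`id`, `repoTags`, `repoDigests`, `size`); the sample payload below is hypothetical, not output captured in this run:

```python
import json

# Hypothetical sample in the shape `crictl images --output json` returns
# (field names assumed from the CRI image listing; one entry taken from
# the YAML above for illustration).
raw = json.dumps({
    "images": [
        {
            "id": "sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
            "repoTags": ["registry.k8s.io/coredns/coredns:v1.10.1"],
            "repoDigests": ["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],
            "size": "16190758",
        }
    ]
})

def summarize(crictl_json: str):
    """Reduce crictl's image list to the id/repoDigests/repoTags/size entries
    that `image ls --format yaml` prints."""
    images = json.loads(crictl_json).get("images", [])
    return [
        {
            "id": img.get("id", ""),
            "repoDigests": img.get("repoDigests", []),
            "repoTags": img.get("repoTags", []),
            "size": img.get("size", ""),
        }
        for img in images
    ]

for entry in summarize(raw):
    print(entry["id"], entry["repoTags"], entry["size"])
```

Images with no tag (e.g. the metrics-scraper entry above) simply carry an empty `repoTags` list through this reduction.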

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-982689 ssh pgrep buildkitd: exit status 1 (218.072514ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 image build -t localhost/my-image:functional-982689 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-982689 image build -t localhost/my-image:functional-982689 testdata/build --alsologtostderr: (3.98737949s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-982689 image build -t localhost/my-image:functional-982689 testdata/build --alsologtostderr:
I0717 21:47:18.416629   21219 out.go:296] Setting OutFile to fd 1 ...
I0717 21:47:18.416765   21219 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:47:18.416775   21219 out.go:309] Setting ErrFile to fd 2...
I0717 21:47:18.416780   21219 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:47:18.417045   21219 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-6542/.minikube/bin
I0717 21:47:18.417771   21219 config.go:182] Loaded profile config "functional-982689": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0717 21:47:18.418273   21219 config.go:182] Loaded profile config "functional-982689": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0717 21:47:18.418620   21219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0717 21:47:18.418663   21219 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 21:47:18.432573   21219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36313
I0717 21:47:18.433010   21219 main.go:141] libmachine: () Calling .GetVersion
I0717 21:47:18.433979   21219 main.go:141] libmachine: Using API Version  1
I0717 21:47:18.434001   21219 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 21:47:18.435510   21219 main.go:141] libmachine: () Calling .GetMachineName
I0717 21:47:18.435719   21219 main.go:141] libmachine: (functional-982689) Calling .GetState
I0717 21:47:18.437847   21219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0717 21:47:18.437892   21219 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 21:47:18.453001   21219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44919
I0717 21:47:18.453423   21219 main.go:141] libmachine: () Calling .GetVersion
I0717 21:47:18.453900   21219 main.go:141] libmachine: Using API Version  1
I0717 21:47:18.453936   21219 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 21:47:18.454328   21219 main.go:141] libmachine: () Calling .GetMachineName
I0717 21:47:18.454509   21219 main.go:141] libmachine: (functional-982689) Calling .DriverName
I0717 21:47:18.454710   21219 ssh_runner.go:195] Run: systemctl --version
I0717 21:47:18.454738   21219 main.go:141] libmachine: (functional-982689) Calling .GetSSHHostname
I0717 21:47:18.457398   21219 main.go:141] libmachine: (functional-982689) DBG | domain functional-982689 has defined MAC address 52:54:00:58:b9:9c in network mk-functional-982689
I0717 21:47:18.457840   21219 main.go:141] libmachine: (functional-982689) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:b9:9c", ip: ""} in network mk-functional-982689: {Iface:virbr1 ExpiryTime:2023-07-17 22:44:46 +0000 UTC Type:0 Mac:52:54:00:58:b9:9c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:functional-982689 Clientid:01:52:54:00:58:b9:9c}
I0717 21:47:18.457867   21219 main.go:141] libmachine: (functional-982689) DBG | domain functional-982689 has defined IP address 192.168.39.82 and MAC address 52:54:00:58:b9:9c in network mk-functional-982689
I0717 21:47:18.458076   21219 main.go:141] libmachine: (functional-982689) Calling .GetSSHPort
I0717 21:47:18.458253   21219 main.go:141] libmachine: (functional-982689) Calling .GetSSHKeyPath
I0717 21:47:18.458390   21219 main.go:141] libmachine: (functional-982689) Calling .GetSSHUsername
I0717 21:47:18.458524   21219 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-6542/.minikube/machines/functional-982689/id_rsa Username:docker}
I0717 21:47:18.584605   21219 build_images.go:151] Building image from path: /tmp/build.2855018910.tar
I0717 21:47:18.584677   21219 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0717 21:47:18.619106   21219 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2855018910.tar
I0717 21:47:18.628584   21219 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2855018910.tar: stat -c "%s %y" /var/lib/minikube/build/build.2855018910.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2855018910.tar': No such file or directory
I0717 21:47:18.628614   21219 ssh_runner.go:362] scp /tmp/build.2855018910.tar --> /var/lib/minikube/build/build.2855018910.tar (3072 bytes)
I0717 21:47:18.671049   21219 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2855018910
I0717 21:47:18.680906   21219 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2855018910 -xf /var/lib/minikube/build/build.2855018910.tar
I0717 21:47:18.691087   21219 containerd.go:378] Building image: /var/lib/minikube/build/build.2855018910
I0717 21:47:18.691155   21219 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2855018910 --local dockerfile=/var/lib/minikube/build/build.2855018910 --output type=image,name=localhost/my-image:functional-982689
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.1s

#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 1.1s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 DONE 0.2s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 1.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:b324290b968d9b14bd85fed13d1223edc24308f3d200ca76ae3ef28d91b0aa12 0.0s done
#8 exporting config sha256:ea724d2b7d5be96dfabb26c32a2402297b2d76210681046cdb4a6fa88501d2a4
#8 exporting config sha256:ea724d2b7d5be96dfabb26c32a2402297b2d76210681046cdb4a6fa88501d2a4 0.0s done
#8 naming to localhost/my-image:functional-982689 done
#8 DONE 0.2s
I0717 21:47:22.316022   21219 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2855018910 --local dockerfile=/var/lib/minikube/build/build.2855018910 --output type=image,name=localhost/my-image:functional-982689: (3.624835631s)
I0717 21:47:22.316091   21219 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2855018910
I0717 21:47:22.334159   21219 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2855018910.tar
I0717 21:47:22.355571   21219 build_images.go:207] Built localhost/my-image:functional-982689 from /tmp/build.2855018910.tar
I0717 21:47:22.355603   21219 build_images.go:123] succeeded building to: functional-982689
I0717 21:47:22.355609   21219 build_images.go:124] failed building to: 
I0717 21:47:22.355639   21219 main.go:141] libmachine: Making call to close driver server
I0717 21:47:22.355651   21219 main.go:141] libmachine: (functional-982689) Calling .Close
I0717 21:47:22.355951   21219 main.go:141] libmachine: (functional-982689) DBG | Closing plugin on server side
I0717 21:47:22.355971   21219 main.go:141] libmachine: Successfully made call to close driver server
I0717 21:47:22.355984   21219 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 21:47:22.356000   21219 main.go:141] libmachine: Making call to close driver server
I0717 21:47:22.356011   21219 main.go:141] libmachine: (functional-982689) Calling .Close
I0717 21:47:22.356212   21219 main.go:141] libmachine: Successfully made call to close driver server
I0717 21:47:22.356235   21219 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.43s)
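The build log above shows the plumbing: minikube tars the local context (`/tmp/build.2855018910.tar`), copies it into the guest, extracts it under `/var/lib/minikube/build`, then runs `buildctl build`. A local sketch of the packaging and extract steps, with scratch paths standing in for minikube's real layout; the buildctl invocation is left as a comment since it needs the guest's buildkitd:

```shell
set -eu

# Scratch paths for illustration only; minikube uses testdata/build and
# /var/lib/minikube/build/build.NNN on the guest.
ctx=$(mktemp -d)    # stand-in for the local build context
dest=$(mktemp -d)   # stand-in for the extraction dir on the guest
printf 'hello\n' > "$ctx/content.txt"
printf 'FROM scratch\nADD content.txt /\n' > "$ctx/Dockerfile"

# Package the context, ship it (minikube does this over scp), extract it.
tar -cf "$dest.tar" -C "$ctx" .
mkdir -p "$dest"
tar -C "$dest" -xf "$dest.tar"

# On the real guest, minikube then runs (not executed here):
#   sudo buildctl build --frontend dockerfile.v0 \
#     --local context=/var/lib/minikube/build/build.NNN \
#     --local dockerfile=/var/lib/minikube/build/build.NNN \
#     --output type=image,name=localhost/my-image:functional-982689

ls "$dest"
```

The `stat -c "%s %y"` existence check in the log is the same idea: confirm the tar is absent before copying, so the scp step is only done when needed.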

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.3s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.28279387s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-982689
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.30s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 image load --daemon gcr.io/google-containers/addon-resizer:functional-982689 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-982689 image load --daemon gcr.io/google-containers/addon-resizer:functional-982689 --alsologtostderr: (4.097748939s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.31s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.4s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-982689 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-982689 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-982689 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 19116: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-982689 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.40s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-982689 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.36s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-982689 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [98b70dbf-c99a-4578-9f8a-01239b9de734] Pending
helpers_test.go:344: "nginx-svc" [98b70dbf-c99a-4578-9f8a-01239b9de734] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [98b70dbf-c99a-4578-9f8a-01239b9de734] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.025771627s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.36s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 image load --daemon gcr.io/google-containers/addon-resizer:functional-982689 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-982689 image load --daemon gcr.io/google-containers/addon-resizer:functional-982689 --alsologtostderr: (4.135219262s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.34s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.153053027s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-982689
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 image load --daemon gcr.io/google-containers/addon-resizer:functional-982689 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-982689 image load --daemon gcr.io/google-containers/addon-resizer:functional-982689 --alsologtostderr: (5.05703068s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.45s)

TestFunctional/parallel/ServiceCmd/List (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.32s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 service list -o json
functional_test.go:1493: Took "395.409359ms" to run "out/minikube-linux-amd64 -p functional-982689 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.40s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.39.82:30509
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

TestFunctional/parallel/ServiceCmd/Format (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.31s)

TestFunctional/parallel/ServiceCmd/URL (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.82:30509
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-982689 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.219.114 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-982689 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

TestFunctional/parallel/ProfileCmd/profile_list (0.3s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "260.453588ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "40.3628ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.30s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "233.058757ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "41.021887ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)

TestFunctional/parallel/MountCmd/any-port (20.82s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-982689 /tmp/TestFunctionalparallelMountCmdany-port3093393566/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1689630412952182423" to /tmp/TestFunctionalparallelMountCmdany-port3093393566/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1689630412952182423" to /tmp/TestFunctionalparallelMountCmdany-port3093393566/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1689630412952182423" to /tmp/TestFunctionalparallelMountCmdany-port3093393566/001/test-1689630412952182423
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-982689 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (213.6525ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 17 21:46 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 17 21:46 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 17 21:46 test-1689630412952182423
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 ssh cat /mount-9p/test-1689630412952182423
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-982689 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [80173623-6247-4951-9db5-7cf812b921e2] Pending
helpers_test.go:344: "busybox-mount" [80173623-6247-4951-9db5-7cf812b921e2] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [80173623-6247-4951-9db5-7cf812b921e2] Running
helpers_test.go:344: "busybox-mount" [80173623-6247-4951-9db5-7cf812b921e2] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [80173623-6247-4951-9db5-7cf812b921e2] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 18.017065992s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-982689 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-982689 /tmp/TestFunctionalparallelMountCmdany-port3093393566/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (20.82s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 image save gcr.io/google-containers/addon-resizer:functional-982689 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-982689 image save gcr.io/google-containers/addon-resizer:functional-982689 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.500321139s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.50s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 image rm gcr.io/google-containers/addon-resizer:functional-982689 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-982689 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (2.23757043s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.50s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-982689
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 image save --daemon gcr.io/google-containers/addon-resizer:functional-982689 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-982689 image save --daemon gcr.io/google-containers/addon-resizer:functional-982689 --alsologtostderr: (1.293459642s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-982689
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.33s)

TestFunctional/parallel/MountCmd/specific-port (1.97s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-982689 /tmp/TestFunctionalparallelMountCmdspecific-port50003724/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-982689 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (247.313859ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-982689 /tmp/TestFunctionalparallelMountCmdspecific-port50003724/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-982689 ssh "sudo umount -f /mount-9p": exit status 1 (217.253366ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-982689 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-982689 /tmp/TestFunctionalparallelMountCmdspecific-port50003724/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.97s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.63s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-982689 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2805754500/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-982689 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2805754500/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-982689 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2805754500/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-982689 ssh "findmnt -T" /mount1: exit status 1 (254.485567ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-982689 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-982689 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-982689 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2805754500/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-982689 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2805754500/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-982689 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2805754500/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.63s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-982689
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-982689
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-982689
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (83.24s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-126698 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0717 21:48:19.962087   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-126698 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m23.241794493s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (83.24s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.51s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-126698 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-126698 addons enable ingress --alsologtostderr -v=5: (12.512186875s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.51s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.66s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-126698 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.66s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (36.63s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-126698 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-126698 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (10.127662556s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-126698 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-126698 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b1097960-d64f-4038-8c96-011fba82f5e3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b1097960-d64f-4038-8c96-011fba82f5e3] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.008767625s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-126698 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-126698 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-126698 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.207
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-126698 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-126698 addons disable ingress-dns --alsologtostderr -v=1: (6.750319358s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-126698 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-126698 addons disable ingress --alsologtostderr -v=1: (7.577541799s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (36.63s)

TestJSONOutput/start/Command (74.01s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-891130 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E0717 21:50:36.116843   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-891130 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m14.00664819s)
--- PASS: TestJSONOutput/start/Command (74.01s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.63s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-891130 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.63s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-891130 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.09s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-891130 --output=json --user=testUser
E0717 21:51:03.803194   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-891130 --output=json --user=testUser: (7.08593587s)
--- PASS: TestJSONOutput/stop/Command (7.09s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-166572 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-166572 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.79511ms)
-- stdout --
	{"specversion":"1.0","id":"71a4d6b6-0cbb-4b3b-997a-aa70402300df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-166572] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8bd37ca0-aadb-43c2-bb41-bc966b0b32f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16899"}}
	{"specversion":"1.0","id":"22f14e03-1e59-4da5-b481-8568362f5206","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fb6a0197-a5ef-4962-9b97-eaf442c9c91f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16899-6542/kubeconfig"}}
	{"specversion":"1.0","id":"f9616000-6c88-44e0-bece-859e979610a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-6542/.minikube"}}
	{"specversion":"1.0","id":"ee85f4a0-589b-4361-bf65-6264192e175b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"add5bc30-c366-4033-936d-df6c1c8595a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d4ecba33-133f-4de7-878c-d7acd9fe781c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-166572" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-166572
--- PASS: TestErrorJSONOutput (0.19s)
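The `--output=json` lines quoted in the stdout above are CloudEvents envelopes (`specversion`, `id`, `source`, `type`, `datacontenttype`, `data`), one JSON object per line. A minimal Go sketch of decoding one such line follows; the `cloudEvent` struct and `parseEvent` helper are illustrative names for this sketch, not minikube's internal types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// cloudEvent mirrors the envelope fields visible in the test output above.
// Illustrative type for this sketch, not minikube's own.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

// parseEvent decodes a single JSON line emitted with --output=json.
func parseEvent(line string) (cloudEvent, error) {
	var ev cloudEvent
	err := json.Unmarshal([]byte(line), &ev)
	return ev, err
}

func main() {
	// The error event from the test output above.
	line := `{"specversion":"1.0","id":"d4ecba33-133f-4de7-878c-d7acd9fe781c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`
	ev, err := parseEvent(line)
	if err != nil {
		panic(err)
	}
	fmt.Println(ev.Type, ev.Data["exitcode"], ev.Data["name"])
	// → io.k8s.sigs.minikube.error 56 DRV_UNSUPPORTED_OS
}
```

The `data` payload is a flat string map in every event shown above, which is why `map[string]string` suffices here; a stricter consumer would switch on `type` and decode `data` into per-event structs.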

TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (101.62s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-071342 --driver=kvm2  --container-runtime=containerd
E0717 21:51:37.347833   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/functional-982689/client.crt: no such file or directory
E0717 21:51:37.353175   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/functional-982689/client.crt: no such file or directory
E0717 21:51:37.363467   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/functional-982689/client.crt: no such file or directory
E0717 21:51:37.383807   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/functional-982689/client.crt: no such file or directory
E0717 21:51:37.424086   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/functional-982689/client.crt: no such file or directory
E0717 21:51:37.504462   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/functional-982689/client.crt: no such file or directory
E0717 21:51:37.664987   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/functional-982689/client.crt: no such file or directory
E0717 21:51:37.985584   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/functional-982689/client.crt: no such file or directory
E0717 21:51:38.626635   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/functional-982689/client.crt: no such file or directory
E0717 21:51:39.907195   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/functional-982689/client.crt: no such file or directory
E0717 21:51:42.467938   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/functional-982689/client.crt: no such file or directory
E0717 21:51:47.588845   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/functional-982689/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-071342 --driver=kvm2  --container-runtime=containerd: (49.821750378s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-073784 --driver=kvm2  --container-runtime=containerd
E0717 21:51:57.829722   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/functional-982689/client.crt: no such file or directory
E0717 21:52:18.310015   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/functional-982689/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-073784 --driver=kvm2  --container-runtime=containerd: (49.062384164s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-071342
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-073784
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-073784" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-073784
helpers_test.go:175: Cleaning up "first-071342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-071342
--- PASS: TestMinikubeProfile (101.62s)

TestMountStart/serial/StartWithMountFirst (27.71s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-585962 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0717 21:52:59.271497   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/functional-982689/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-585962 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (26.711388332s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.71s)

TestMountStart/serial/VerifyMountFirst (0.38s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-585962 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-585962 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

TestMountStart/serial/StartWithMountSecond (27.72s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-609281 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-609281 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (26.718830229s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.72s)

TestMountStart/serial/VerifyMountSecond (0.38s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-609281 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-609281 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

TestMountStart/serial/DeleteFirst (0.65s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-585962 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.65s)

TestMountStart/serial/VerifyMountPostDelete (0.38s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-609281 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-609281 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

TestMountStart/serial/Stop (1.16s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-609281
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-609281: (1.156783494s)
--- PASS: TestMountStart/serial/Stop (1.16s)

TestMountStart/serial/RestartStopped (23.6s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-609281
E0717 21:54:05.425934   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/ingress-addon-legacy-126698/client.crt: no such file or directory
E0717 21:54:05.431185   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/ingress-addon-legacy-126698/client.crt: no such file or directory
E0717 21:54:05.441420   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/ingress-addon-legacy-126698/client.crt: no such file or directory
E0717 21:54:05.461652   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/ingress-addon-legacy-126698/client.crt: no such file or directory
E0717 21:54:05.501934   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/ingress-addon-legacy-126698/client.crt: no such file or directory
E0717 21:54:05.582309   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/ingress-addon-legacy-126698/client.crt: no such file or directory
E0717 21:54:05.742770   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/ingress-addon-legacy-126698/client.crt: no such file or directory
E0717 21:54:06.063312   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/ingress-addon-legacy-126698/client.crt: no such file or directory
E0717 21:54:06.704344   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/ingress-addon-legacy-126698/client.crt: no such file or directory
E0717 21:54:07.984638   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/ingress-addon-legacy-126698/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-609281: (22.597667307s)
--- PASS: TestMountStart/serial/RestartStopped (23.60s)

TestMountStart/serial/VerifyMountPostStop (0.38s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-609281 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-609281 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)
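Each VerifyMount step above asserts the 9p mount survives by running `minikube ssh -- mount | grep 9p`. The same check can be sketched in Go against captured `mount` output; the sample lines and the `has9pMount` helper are illustrative, not part of the test suite:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// has9pMount reports whether any line of `mount` output lists a 9p
// filesystem at the given mount point, mirroring `mount | grep 9p`
// but also pinning the mount point.
func has9pMount(mountOutput, mountPoint string) bool {
	sc := bufio.NewScanner(strings.NewReader(mountOutput))
	for sc.Scan() {
		line := sc.Text()
		if strings.Contains(line, " on "+mountPoint+" ") &&
			strings.Contains(line, "type 9p") {
			return true
		}
	}
	return false
}

func main() {
	// Illustrative `mount` output; real lines come from `minikube ssh -- mount`.
	out := "tmpfs on /tmp type tmpfs (rw)\n" +
		"192.168.39.1 on /minikube-host type 9p (rw,relatime)\n"
	fmt.Println(has9pMount(out, "/minikube-host"))
	// → true
}
```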

TestMultiNode/serial/FreshStart2Nodes (108.78s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-756389 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0717 21:54:15.666576   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/ingress-addon-legacy-126698/client.crt: no such file or directory
E0717 21:54:21.192472   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/functional-982689/client.crt: no such file or directory
E0717 21:54:25.906763   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/ingress-addon-legacy-126698/client.crt: no such file or directory
E0717 21:54:46.387862   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/ingress-addon-legacy-126698/client.crt: no such file or directory
E0717 21:55:27.348306   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/ingress-addon-legacy-126698/client.crt: no such file or directory
E0717 21:55:36.116188   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-756389 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m48.366370499s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (108.78s)

TestMultiNode/serial/DeployApp2Nodes (4.94s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756389 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756389 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-756389 -- rollout status deployment/busybox: (3.219333702s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756389 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756389 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756389 -- exec busybox-67b7f59bb-58968 -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756389 -- exec busybox-67b7f59bb-ztjxx -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756389 -- exec busybox-67b7f59bb-58968 -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756389 -- exec busybox-67b7f59bb-ztjxx -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756389 -- exec busybox-67b7f59bb-58968 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756389 -- exec busybox-67b7f59bb-ztjxx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.94s)

TestMultiNode/serial/PingHostFrom2Pods (0.82s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756389 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756389 -- exec busybox-67b7f59bb-58968 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756389 -- exec busybox-67b7f59bb-58968 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756389 -- exec busybox-67b7f59bb-ztjxx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756389 -- exec busybox-67b7f59bb-ztjxx -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.82s)
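The test above recovers the host IP from busybox `nslookup host.minikube.internal` output by piping through `awk 'NR==5'` (take line 5) and `cut -d' ' -f3` (take the third single-space-delimited field), then pings it. A Go sketch of that same extraction; the sample nslookup output and the `hostIPFromNslookup` helper are illustrative:

```go
package main

import (
	"fmt"
	"strings"
)

// hostIPFromNslookup mimics the shell pipeline in the test:
//   nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3
// It takes line 5 of the output and returns its third field,
// splitting on single spaces exactly as cut -d' ' does.
func hostIPFromNslookup(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Illustrative busybox nslookup output; the real text depends on the resolver.
	sample := "Server:\t10.96.0.10\n" +
		"Address:\t10.96.0.10:53\n" +
		"\n" +
		"Name:\thost.minikube.internal\n" +
		"Address 1: 192.168.39.1 host.minikube.internal\n"
	fmt.Println(hostIPFromNslookup(sample))
	// → 192.168.39.1
}
```

Pinning a fixed line number makes the check brittle against resolver output changes, which is worth knowing when this assertion fails.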

TestMultiNode/serial/AddNode (41.54s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-756389 -v 3 --alsologtostderr
E0717 21:56:37.348169   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/functional-982689/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-756389 -v 3 --alsologtostderr: (40.965527175s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.54s)

TestMultiNode/serial/ProfileList (0.2s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.20s)

TestMultiNode/serial/CopyFile (7.18s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 cp testdata/cp-test.txt multinode-756389:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 ssh -n multinode-756389 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 cp multinode-756389:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1757816324/001/cp-test_multinode-756389.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 ssh -n multinode-756389 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 cp multinode-756389:/home/docker/cp-test.txt multinode-756389-m02:/home/docker/cp-test_multinode-756389_multinode-756389-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 ssh -n multinode-756389 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 ssh -n multinode-756389-m02 "sudo cat /home/docker/cp-test_multinode-756389_multinode-756389-m02.txt"
E0717 21:56:49.269101   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/ingress-addon-legacy-126698/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 cp multinode-756389:/home/docker/cp-test.txt multinode-756389-m03:/home/docker/cp-test_multinode-756389_multinode-756389-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 ssh -n multinode-756389 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 ssh -n multinode-756389-m03 "sudo cat /home/docker/cp-test_multinode-756389_multinode-756389-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 cp testdata/cp-test.txt multinode-756389-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 ssh -n multinode-756389-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 cp multinode-756389-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1757816324/001/cp-test_multinode-756389-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 ssh -n multinode-756389-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 cp multinode-756389-m02:/home/docker/cp-test.txt multinode-756389:/home/docker/cp-test_multinode-756389-m02_multinode-756389.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 ssh -n multinode-756389-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 ssh -n multinode-756389 "sudo cat /home/docker/cp-test_multinode-756389-m02_multinode-756389.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 cp multinode-756389-m02:/home/docker/cp-test.txt multinode-756389-m03:/home/docker/cp-test_multinode-756389-m02_multinode-756389-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 ssh -n multinode-756389-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 ssh -n multinode-756389-m03 "sudo cat /home/docker/cp-test_multinode-756389-m02_multinode-756389-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 cp testdata/cp-test.txt multinode-756389-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 ssh -n multinode-756389-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 cp multinode-756389-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1757816324/001/cp-test_multinode-756389-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 ssh -n multinode-756389-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 cp multinode-756389-m03:/home/docker/cp-test.txt multinode-756389:/home/docker/cp-test_multinode-756389-m03_multinode-756389.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 ssh -n multinode-756389-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 ssh -n multinode-756389 "sudo cat /home/docker/cp-test_multinode-756389-m03_multinode-756389.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 cp multinode-756389-m03:/home/docker/cp-test.txt multinode-756389-m02:/home/docker/cp-test_multinode-756389-m03_multinode-756389-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 ssh -n multinode-756389-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 ssh -n multinode-756389-m02 "sudo cat /home/docker/cp-test_multinode-756389-m03_multinode-756389-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.18s)

TestMultiNode/serial/StopNode (2.22s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-756389 node stop m03: (1.344732268s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-756389 status: exit status 7 (426.64495ms)
-- stdout --
	multinode-756389
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-756389-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-756389-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-756389 status --alsologtostderr: exit status 7 (443.08155ms)

-- stdout --
	multinode-756389
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-756389-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-756389-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0717 21:56:56.271152   27417 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:56:56.271285   27417 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:56:56.271293   27417 out.go:309] Setting ErrFile to fd 2...
	I0717 21:56:56.271297   27417 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:56:56.271506   27417 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-6542/.minikube/bin
	I0717 21:56:56.271664   27417 out.go:303] Setting JSON to false
	I0717 21:56:56.271687   27417 mustload.go:65] Loading cluster: multinode-756389
	I0717 21:56:56.271727   27417 notify.go:220] Checking for updates...
	I0717 21:56:56.272027   27417 config.go:182] Loaded profile config "multinode-756389": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0717 21:56:56.272039   27417 status.go:255] checking status of multinode-756389 ...
	I0717 21:56:56.272411   27417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:56:56.272468   27417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:56:56.289803   27417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38175
	I0717 21:56:56.290240   27417 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:56:56.290768   27417 main.go:141] libmachine: Using API Version  1
	I0717 21:56:56.290792   27417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:56:56.291195   27417 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:56:56.291400   27417 main.go:141] libmachine: (multinode-756389) Calling .GetState
	I0717 21:56:56.293074   27417 status.go:330] multinode-756389 host status = "Running" (err=<nil>)
	I0717 21:56:56.293090   27417 host.go:66] Checking if "multinode-756389" exists ...
	I0717 21:56:56.293433   27417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:56:56.293482   27417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:56:56.308243   27417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43953
	I0717 21:56:56.308664   27417 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:56:56.309151   27417 main.go:141] libmachine: Using API Version  1
	I0717 21:56:56.309178   27417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:56:56.309494   27417 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:56:56.309682   27417 main.go:141] libmachine: (multinode-756389) Calling .GetIP
	I0717 21:56:56.312437   27417 main.go:141] libmachine: (multinode-756389) DBG | domain multinode-756389 has defined MAC address 52:54:00:11:a3:df in network mk-multinode-756389
	I0717 21:56:56.312889   27417 main.go:141] libmachine: (multinode-756389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:a3:df", ip: ""} in network mk-multinode-756389: {Iface:virbr1 ExpiryTime:2023-07-17 22:54:26 +0000 UTC Type:0 Mac:52:54:00:11:a3:df Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:multinode-756389 Clientid:01:52:54:00:11:a3:df}
	I0717 21:56:56.312928   27417 main.go:141] libmachine: (multinode-756389) DBG | domain multinode-756389 has defined IP address 192.168.39.66 and MAC address 52:54:00:11:a3:df in network mk-multinode-756389
	I0717 21:56:56.313069   27417 host.go:66] Checking if "multinode-756389" exists ...
	I0717 21:56:56.313477   27417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:56:56.313553   27417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:56:56.330064   27417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38573
	I0717 21:56:56.330496   27417 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:56:56.330946   27417 main.go:141] libmachine: Using API Version  1
	I0717 21:56:56.330969   27417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:56:56.331347   27417 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:56:56.331613   27417 main.go:141] libmachine: (multinode-756389) Calling .DriverName
	I0717 21:56:56.331838   27417 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 21:56:56.331862   27417 main.go:141] libmachine: (multinode-756389) Calling .GetSSHHostname
	I0717 21:56:56.334771   27417 main.go:141] libmachine: (multinode-756389) DBG | domain multinode-756389 has defined MAC address 52:54:00:11:a3:df in network mk-multinode-756389
	I0717 21:56:56.335194   27417 main.go:141] libmachine: (multinode-756389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:a3:df", ip: ""} in network mk-multinode-756389: {Iface:virbr1 ExpiryTime:2023-07-17 22:54:26 +0000 UTC Type:0 Mac:52:54:00:11:a3:df Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:multinode-756389 Clientid:01:52:54:00:11:a3:df}
	I0717 21:56:56.335227   27417 main.go:141] libmachine: (multinode-756389) DBG | domain multinode-756389 has defined IP address 192.168.39.66 and MAC address 52:54:00:11:a3:df in network mk-multinode-756389
	I0717 21:56:56.335367   27417 main.go:141] libmachine: (multinode-756389) Calling .GetSSHPort
	I0717 21:56:56.335533   27417 main.go:141] libmachine: (multinode-756389) Calling .GetSSHKeyPath
	I0717 21:56:56.335690   27417 main.go:141] libmachine: (multinode-756389) Calling .GetSSHUsername
	I0717 21:56:56.335806   27417 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-6542/.minikube/machines/multinode-756389/id_rsa Username:docker}
	I0717 21:56:56.421609   27417 ssh_runner.go:195] Run: systemctl --version
	I0717 21:56:56.429031   27417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 21:56:56.444891   27417 kubeconfig.go:92] found "multinode-756389" server: "https://192.168.39.66:8443"
	I0717 21:56:56.444919   27417 api_server.go:166] Checking apiserver status ...
	I0717 21:56:56.444953   27417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 21:56:56.458958   27417 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1094/cgroup
	I0717 21:56:56.468021   27417 api_server.go:182] apiserver freezer: "11:freezer:/kubepods/burstable/pod6f352b428a7fdc83eb985b9e0f7c3fca/c811bbb3bb642b1b7df1d063211747adb97c1d1c30746bab8e1145d4e5c73f01"
	I0717 21:56:56.468096   27417 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod6f352b428a7fdc83eb985b9e0f7c3fca/c811bbb3bb642b1b7df1d063211747adb97c1d1c30746bab8e1145d4e5c73f01/freezer.state
	I0717 21:56:56.481772   27417 api_server.go:204] freezer state: "THAWED"
	I0717 21:56:56.481806   27417 api_server.go:253] Checking apiserver healthz at https://192.168.39.66:8443/healthz ...
	I0717 21:56:56.487171   27417 api_server.go:279] https://192.168.39.66:8443/healthz returned 200:
	ok
	I0717 21:56:56.487194   27417 status.go:421] multinode-756389 apiserver status = Running (err=<nil>)
	I0717 21:56:56.487202   27417 status.go:257] multinode-756389 status: &{Name:multinode-756389 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 21:56:56.487216   27417 status.go:255] checking status of multinode-756389-m02 ...
	I0717 21:56:56.487525   27417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:56:56.487558   27417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:56:56.502739   27417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37201
	I0717 21:56:56.503237   27417 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:56:56.503839   27417 main.go:141] libmachine: Using API Version  1
	I0717 21:56:56.503859   27417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:56:56.504193   27417 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:56:56.504368   27417 main.go:141] libmachine: (multinode-756389-m02) Calling .GetState
	I0717 21:56:56.506291   27417 status.go:330] multinode-756389-m02 host status = "Running" (err=<nil>)
	I0717 21:56:56.506318   27417 host.go:66] Checking if "multinode-756389-m02" exists ...
	I0717 21:56:56.506625   27417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:56:56.506656   27417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:56:56.521971   27417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44023
	I0717 21:56:56.522386   27417 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:56:56.522842   27417 main.go:141] libmachine: Using API Version  1
	I0717 21:56:56.522864   27417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:56:56.523129   27417 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:56:56.523307   27417 main.go:141] libmachine: (multinode-756389-m02) Calling .GetIP
	I0717 21:56:56.526181   27417 main.go:141] libmachine: (multinode-756389-m02) DBG | domain multinode-756389-m02 has defined MAC address 52:54:00:6d:19:7b in network mk-multinode-756389
	I0717 21:56:56.526630   27417 main.go:141] libmachine: (multinode-756389-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:19:7b", ip: ""} in network mk-multinode-756389: {Iface:virbr1 ExpiryTime:2023-07-17 22:55:35 +0000 UTC Type:0 Mac:52:54:00:6d:19:7b Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-756389-m02 Clientid:01:52:54:00:6d:19:7b}
	I0717 21:56:56.526655   27417 main.go:141] libmachine: (multinode-756389-m02) DBG | domain multinode-756389-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:6d:19:7b in network mk-multinode-756389
	I0717 21:56:56.526829   27417 host.go:66] Checking if "multinode-756389-m02" exists ...
	I0717 21:56:56.527098   27417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:56:56.527139   27417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:56:56.541766   27417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46369
	I0717 21:56:56.542217   27417 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:56:56.542723   27417 main.go:141] libmachine: Using API Version  1
	I0717 21:56:56.542745   27417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:56:56.543081   27417 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:56:56.543324   27417 main.go:141] libmachine: (multinode-756389-m02) Calling .DriverName
	I0717 21:56:56.543513   27417 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 21:56:56.543540   27417 main.go:141] libmachine: (multinode-756389-m02) Calling .GetSSHHostname
	I0717 21:56:56.546141   27417 main.go:141] libmachine: (multinode-756389-m02) DBG | domain multinode-756389-m02 has defined MAC address 52:54:00:6d:19:7b in network mk-multinode-756389
	I0717 21:56:56.546556   27417 main.go:141] libmachine: (multinode-756389-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:19:7b", ip: ""} in network mk-multinode-756389: {Iface:virbr1 ExpiryTime:2023-07-17 22:55:35 +0000 UTC Type:0 Mac:52:54:00:6d:19:7b Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-756389-m02 Clientid:01:52:54:00:6d:19:7b}
	I0717 21:56:56.546590   27417 main.go:141] libmachine: (multinode-756389-m02) DBG | domain multinode-756389-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:6d:19:7b in network mk-multinode-756389
	I0717 21:56:56.546753   27417 main.go:141] libmachine: (multinode-756389-m02) Calling .GetSSHPort
	I0717 21:56:56.546920   27417 main.go:141] libmachine: (multinode-756389-m02) Calling .GetSSHKeyPath
	I0717 21:56:56.547078   27417 main.go:141] libmachine: (multinode-756389-m02) Calling .GetSSHUsername
	I0717 21:56:56.547229   27417 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-6542/.minikube/machines/multinode-756389-m02/id_rsa Username:docker}
	I0717 21:56:56.644650   27417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 21:56:56.657068   27417 status.go:257] multinode-756389-m02 status: &{Name:multinode-756389-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0717 21:56:56.657101   27417 status.go:255] checking status of multinode-756389-m03 ...
	I0717 21:56:56.657445   27417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 21:56:56.657478   27417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:56:56.672339   27417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43675
	I0717 21:56:56.672757   27417 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:56:56.673218   27417 main.go:141] libmachine: Using API Version  1
	I0717 21:56:56.673264   27417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:56:56.673585   27417 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:56:56.673786   27417 main.go:141] libmachine: (multinode-756389-m03) Calling .GetState
	I0717 21:56:56.675256   27417 status.go:330] multinode-756389-m03 host status = "Stopped" (err=<nil>)
	I0717 21:56:56.675268   27417 status.go:343] host is not running, skipping remaining checks
	I0717 21:56:56.675273   27417 status.go:257] multinode-756389-m03 status: &{Name:multinode-756389-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.22s)
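The transcript above shows minikube's plain-text `status` report: a bare node-name line followed by indented `key: value` pairs, with `exit status 7` signalling that at least one node is down. As a minimal sketch (not part of the test suite), here is how that format could be reduced to `name=host-state` pairs with awk; the embedded sample is a tab-stripped copy of the StopNode output above:

```shell
# Sketch only: parse minikube's plain-text `status` output (tabs removed)
# into "name=host-state" pairs. The sample mirrors the StopNode run,
# where m03 is the stopped worker.
status_output='multinode-756389
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

multinode-756389-m03
type: Worker
host: Stopped
kubelet: Stopped'

parsed=$(printf '%s\n' "$status_output" | awk '
  NF && $0 !~ /:/ { name = $0 }          # a line without ":" names the next node
  /^host:/        { print name "=" $2 }  # host: Running | host: Stopped
')
printf '%s\n' "$parsed"
# multinode-756389=Running
# multinode-756389-m03=Stopped
```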

TestMultiNode/serial/StartAfterStop (27.4s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 node start m03 --alsologtostderr
E0717 21:57:05.033395   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/functional-982689/client.crt: no such file or directory
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-756389 node start m03 --alsologtostderr: (26.759766097s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (27.40s)

TestMultiNode/serial/RestartKeepsNodes (324.43s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-756389
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-756389
E0717 21:59:05.425980   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/ingress-addon-legacy-126698/client.crt: no such file or directory
E0717 21:59:33.109927   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/ingress-addon-legacy-126698/client.crt: no such file or directory
E0717 22:00:36.116413   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-756389: (3m14.717938558s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-756389 --wait=true -v=8 --alsologtostderr
E0717 22:01:37.348022   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/functional-982689/client.crt: no such file or directory
E0717 22:01:59.164067   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-756389 --wait=true -v=8 --alsologtostderr: (2m9.633005985s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-756389
--- PASS: TestMultiNode/serial/RestartKeepsNodes (324.43s)

TestMultiNode/serial/DeleteNode (1.74s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-756389 node delete m03: (1.209257562s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.74s)

TestMultiNode/serial/StopMultiNode (183.62s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 stop
E0717 22:04:05.425943   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/ingress-addon-legacy-126698/client.crt: no such file or directory
E0717 22:05:36.116167   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-756389 stop: (3m3.46215543s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-756389 status: exit status 7 (74.201412ms)

-- stdout --
	multinode-756389
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-756389-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-756389 status --alsologtostderr: exit status 7 (79.493947ms)

-- stdout --
	multinode-756389
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-756389-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0717 22:05:53.823709   29613 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:05:53.823836   29613 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:05:53.823846   29613 out.go:309] Setting ErrFile to fd 2...
	I0717 22:05:53.823853   29613 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:05:53.824055   29613 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-6542/.minikube/bin
	I0717 22:05:53.824253   29613 out.go:303] Setting JSON to false
	I0717 22:05:53.824284   29613 mustload.go:65] Loading cluster: multinode-756389
	I0717 22:05:53.824314   29613 notify.go:220] Checking for updates...
	I0717 22:05:53.824671   29613 config.go:182] Loaded profile config "multinode-756389": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0717 22:05:53.824688   29613 status.go:255] checking status of multinode-756389 ...
	I0717 22:05:53.825041   29613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 22:05:53.825166   29613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:05:53.843325   29613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45935
	I0717 22:05:53.843692   29613 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:05:53.844238   29613 main.go:141] libmachine: Using API Version  1
	I0717 22:05:53.844264   29613 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:05:53.844637   29613 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:05:53.844830   29613 main.go:141] libmachine: (multinode-756389) Calling .GetState
	I0717 22:05:53.846289   29613 status.go:330] multinode-756389 host status = "Stopped" (err=<nil>)
	I0717 22:05:53.846303   29613 status.go:343] host is not running, skipping remaining checks
	I0717 22:05:53.846310   29613 status.go:257] multinode-756389 status: &{Name:multinode-756389 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 22:05:53.846329   29613 status.go:255] checking status of multinode-756389-m02 ...
	I0717 22:05:53.846581   29613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0717 22:05:53.846613   29613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:05:53.860320   29613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43967
	I0717 22:05:53.860640   29613 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:05:53.861099   29613 main.go:141] libmachine: Using API Version  1
	I0717 22:05:53.861124   29613 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:05:53.861431   29613 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:05:53.861632   29613 main.go:141] libmachine: (multinode-756389-m02) Calling .GetState
	I0717 22:05:53.863217   29613 status.go:330] multinode-756389-m02 host status = "Stopped" (err=<nil>)
	I0717 22:05:53.863229   29613 status.go:343] host is not running, skipping remaining checks
	I0717 22:05:53.863234   29613 status.go:257] multinode-756389-m02 status: &{Name:multinode-756389-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (183.62s)

TestMultiNode/serial/RestartMultiNode (92.64s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-756389 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0717 22:06:37.348174   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/functional-982689/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-756389 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m32.109203464s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756389 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (92.64s)

TestMultiNode/serial/ValidateNameConflict (49.58s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-756389
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-756389-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-756389-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (59.147175ms)

-- stdout --
	* [multinode-756389-m02] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-6542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-6542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-756389-m02' is duplicated with machine name 'multinode-756389-m02' in profile 'multinode-756389'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-756389-m03 --driver=kvm2  --container-runtime=containerd
E0717 22:08:00.393978   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/functional-982689/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-756389-m03 --driver=kvm2  --container-runtime=containerd: (48.282684525s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-756389
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-756389: exit status 80 (220.784683ms)

-- stdout --
	* Adding node m03 to cluster multinode-756389
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-756389-m03 already exists in multinode-756389-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-756389-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (49.58s)
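The run above exits with status 14 (`MK_USAGE`) when the requested profile name collides with an existing machine name. As a hedged sketch of how a caller might branch on that exit code, the snippet below uses `minikube_start`, a hypothetical stand-in function that only simulates the failing invocation seen in the transcript:

```shell
# Sketch only: `minikube_start` is a stand-in, not the real binary; it
# simulates the duplicate-profile failure above (exit status 14 / MK_USAGE).
minikube_start() { return 14; }

minikube_start
rc=$?
case "$rc" in
  0)  outcome="started" ;;
  14) outcome="usage-error" ;;    # MK_USAGE, e.g. duplicate profile name
  *)  outcome="other-failure" ;;
esac
printf '%s\n' "$outcome"
# usage-error
```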

TestPreload (240.2s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-049273 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0717 22:09:05.425174   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/ingress-addon-legacy-126698/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-049273 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m25.72035442s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-049273 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-049273 image pull gcr.io/k8s-minikube/busybox: (1.472244802s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-049273
E0717 22:10:28.470753   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/ingress-addon-legacy-126698/client.crt: no such file or directory
E0717 22:10:36.116147   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-049273: (1m31.782452035s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-049273 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
E0717 22:11:37.348569   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/functional-982689/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-049273 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (59.990081199s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-049273 image list
helpers_test.go:175: Cleaning up "test-preload-049273" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-049273
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-049273: (1.022326842s)
--- PASS: TestPreload (240.20s)

                                                
                                    
TestScheduledStopUnix (120.01s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-912301 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-912301 --memory=2048 --driver=kvm2  --container-runtime=containerd: (48.504925441s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-912301 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-912301 -n scheduled-stop-912301
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-912301 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-912301 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-912301 -n scheduled-stop-912301
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-912301
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-912301 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0717 22:14:05.425177   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/ingress-addon-legacy-126698/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-912301
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-912301: exit status 7 (57.92797ms)

-- stdout --
	scheduled-stop-912301
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-912301 -n scheduled-stop-912301
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-912301 -n scheduled-stop-912301: exit status 7 (63.178508ms)

-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-912301" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-912301
--- PASS: TestScheduledStopUnix (120.01s)

                                                
                                    
TestRunningBinaryUpgrade (260.28s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.22.0.3285888907.exe start -p running-upgrade-185744 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.22.0.3285888907.exe start -p running-upgrade-185744 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m48.585943022s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-185744 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:142: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-185744 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m29.345388044s)
helpers_test.go:175: Cleaning up "running-upgrade-185744" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-185744
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-185744: (1.856692723s)
--- PASS: TestRunningBinaryUpgrade (260.28s)

                                                
                                    
TestKubernetesUpgrade (165.6s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-722551 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0717 22:15:36.116123   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.crt: no such file or directory
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-722551 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m44.606257081s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-722551
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-722551: (2.11237261s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-722551 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-722551 status --format={{.Host}}: exit status 7 (87.672583ms)

-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-722551 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-722551 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (48.451290375s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-722551 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-722551 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-722551 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (101.785191ms)

-- stdout --
	* [kubernetes-upgrade-722551] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-6542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-6542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.27.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-722551
	    minikube start -p kubernetes-upgrade-722551 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7225512 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.27.3, by running:
	    
	    minikube start -p kubernetes-upgrade-722551 --kubernetes-version=v1.27.3
	    
** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-722551 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-722551 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (8.972883211s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-722551" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-722551
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-722551: (1.184683257s)
--- PASS: TestKubernetesUpgrade (165.60s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-161622 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-161622 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (77.121701ms)

-- stdout --
	* [NoKubernetes-161622] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-6542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-6542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (104.32s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-161622 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-161622 --driver=kvm2  --container-runtime=containerd: (1m44.047051542s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-161622 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (104.32s)

                                                
                                    
TestNetworkPlugins/group/false (3.41s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-925513 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-925513 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (102.129797ms)

-- stdout --
	* [false-925513] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-6542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-6542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
-- /stdout --
** stderr ** 
	I0717 22:15:12.247879   34057 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:15:12.247989   34057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:15:12.247999   34057 out.go:309] Setting ErrFile to fd 2...
	I0717 22:15:12.248003   34057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:15:12.248215   34057 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-6542/.minikube/bin
	I0717 22:15:12.248760   34057 out.go:303] Setting JSON to false
	I0717 22:15:12.253718   34057 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3464,"bootTime":1689628648,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 22:15:12.253786   34057 start.go:138] virtualization: kvm guest
	I0717 22:15:12.256312   34057 out.go:177] * [false-925513] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 22:15:12.257917   34057 notify.go:220] Checking for updates...
	I0717 22:15:12.257933   34057 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 22:15:12.259788   34057 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 22:15:12.261331   34057 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-6542/kubeconfig
	I0717 22:15:12.263997   34057 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-6542/.minikube
	I0717 22:15:12.265325   34057 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 22:15:12.266676   34057 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 22:15:12.268454   34057 config.go:182] Loaded profile config "NoKubernetes-161622": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0717 22:15:12.268573   34057 config.go:182] Loaded profile config "offline-containerd-134225": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0717 22:15:12.268648   34057 config.go:182] Loaded profile config "running-upgrade-185744": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0717 22:15:12.268735   34057 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 22:15:12.304821   34057 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 22:15:12.306248   34057 start.go:298] selected driver: kvm2
	I0717 22:15:12.306263   34057 start.go:880] validating driver "kvm2" against <nil>
	I0717 22:15:12.306278   34057 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 22:15:12.308509   34057 out.go:177] 
	W0717 22:15:12.309898   34057 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0717 22:15:12.311223   34057 out.go:177] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-925513 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-925513

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-925513

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-925513

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-925513

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-925513

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-925513

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-925513

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-925513

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-925513

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-925513

>>> host: /etc/nsswitch.conf:
* Profile "false-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-925513"

>>> host: /etc/hosts:
* Profile "false-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-925513"

>>> host: /etc/resolv.conf:
* Profile "false-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-925513"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-925513

>>> host: crictl pods:
* Profile "false-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-925513"

>>> host: crictl containers:
* Profile "false-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-925513"

>>> k8s: describe netcat deployment:
error: context "false-925513" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-925513" does not exist

>>> k8s: netcat logs:
error: context "false-925513" does not exist

>>> k8s: describe coredns deployment:
error: context "false-925513" does not exist

>>> k8s: describe coredns pods:
error: context "false-925513" does not exist

>>> k8s: coredns logs:
error: context "false-925513" does not exist

>>> k8s: describe api server pod(s):
error: context "false-925513" does not exist

>>> k8s: api server logs:
error: context "false-925513" does not exist

>>> host: /etc/cni:
* Profile "false-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-925513"

>>> host: ip a s:
* Profile "false-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-925513"

>>> host: ip r s:
* Profile "false-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-925513"

>>> host: iptables-save:
* Profile "false-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-925513"

>>> host: iptables table nat:
* Profile "false-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-925513"

>>> k8s: describe kube-proxy daemon set:
error: context "false-925513" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-925513" does not exist

>>> k8s: kube-proxy logs:
error: context "false-925513" does not exist

>>> host: kubelet daemon status:
* Profile "false-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-925513"

>>> host: kubelet daemon config:
* Profile "false-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-925513"

>>> k8s: kubelet logs:
* Profile "false-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-925513"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-925513"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-925513"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-925513

>>> host: docker daemon status:
* Profile "false-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-925513"

>>> host: docker daemon config:
* Profile "false-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-925513"

>>> host: /etc/docker/daemon.json:
* Profile "false-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-925513"

>>> host: docker system info:
* Profile "false-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-925513"

>>> host: cri-docker daemon status:
* Profile "false-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-925513"

>>> host: cri-docker daemon config:
* Profile "false-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-925513"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-925513"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-925513"

>>> host: cri-dockerd version:
* Profile "false-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-925513"

>>> host: containerd daemon status:
* Profile "false-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-925513"

>>> host: containerd daemon config:
* Profile "false-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-925513"
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-925513"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-925513"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-925513"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-925513"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-925513"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-925513"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-925513"

                                                
                                                
----------------------- debugLogs end: false-925513 [took: 3.13442825s] --------------------------------
helpers_test.go:175: Cleaning up "false-925513" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-925513
--- PASS: TestNetworkPlugins/group/false (3.41s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (46.68s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-161622 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-161622 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (45.436157281s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-161622 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-161622 status -o json: exit status 2 (228.233096ms)

-- stdout --
	{"Name":"NoKubernetes-161622","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
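The exit status 2 above is not a command failure: `minikube status` signals stopped or degraded components through a non-zero exit code, and StartWithStopK8s only requires that the JSON show the host up with kubelet stopped. A minimal sketch of checking those fields, with the log's JSON inlined so it runs without a cluster (a real run would capture `minikube status -o json`; `sed` stands in for `jq`):

```shell
# Status JSON copied from the log; Host should be Running, Kubelet Stopped.
json='{"Name":"NoKubernetes-161622","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'
# Extract one string field by key (no jq dependency).
host=$(printf '%s' "$json" | sed -n 's/.*"Host":"\([^"]*\)".*/\1/p')
kubelet=$(printf '%s' "$json" | sed -n 's/.*"Kubelet":"\([^"]*\)".*/\1/p')
echo "host=$host kubelet=$kubelet"
```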
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-161622
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-161622: (1.014805036s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (46.68s)

                                                
                                    
TestNoKubernetes/serial/Start (30.46s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-161622 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-161622 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (30.459642362s)
--- PASS: TestNoKubernetes/serial/Start (30.46s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-161622 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-161622 "sudo systemctl is-active --quiet service kubelet": exit status 1 (216.389024ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
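`systemctl is-active --quiet` reports state only through its exit code: 0 means active, and an inactive unit conventionally exits 3 (the LSB "program is not running" code), which minikube's ssh wrapper relays as `Process exited with status 3`. That non-zero exit is exactly the outcome VerifyK8sNotRunning wants. A runnable sketch of the same branching with the remote call faked out (`check_kubelet` is a hypothetical stand-in, not part of the test suite):

```shell
# Stand-in for `minikube ssh ... "systemctl is-active --quiet kubelet"`;
# the subshell's `exit 3` fakes systemd's "unit is inactive" result.
check_kubelet() {
  ( exit 3 )
}
if check_kubelet; then
  echo "kubelet is active"
else
  # $? still holds the condition's exit status here.
  echo "kubelet not active (exit $?)"
fi
```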
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (6.73s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (3.59888387s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.13005139s)
--- PASS: TestNoKubernetes/serial/ProfileList (6.73s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.25s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-161622
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-161622: (1.247938201s)
--- PASS: TestNoKubernetes/serial/Stop (1.25s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (37.61s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-161622 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-161622 --driver=kvm2  --container-runtime=containerd: (37.609445424s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (37.61s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-161622 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-161622 "sudo systemctl is-active --quiet service kubelet": exit status 1 (207.402949ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.42s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.42s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (176.99s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.22.0.500160971.exe start -p stopped-upgrade-040115 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.22.0.500160971.exe start -p stopped-upgrade-040115 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m49.295053741s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.22.0.500160971.exe -p stopped-upgrade-040115 stop
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.22.0.500160971.exe -p stopped-upgrade-040115 stop: (3.134870017s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-040115 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:210: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-040115 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m4.556675089s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (176.99s)

                                                
                                    
TestPause/serial/Start (128.02s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-393014 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
E0717 22:19:05.425400   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/ingress-addon-legacy-126698/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-393014 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (2m8.017127155s)
--- PASS: TestPause/serial/Start (128.02s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (105.67s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-925513 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-925513 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m45.66976273s)
--- PASS: TestNetworkPlugins/group/auto/Start (105.67s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (71.75s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-925513 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-925513 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m11.746851133s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (71.75s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.9s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-040115
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.90s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (9.41s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-393014 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-393014 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (9.3929079s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (9.41s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (103.11s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-925513 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-925513 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m43.113338297s)
--- PASS: TestNetworkPlugins/group/calico/Start (103.11s)

                                                
                                    
TestPause/serial/Pause (1.22s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-393014 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-393014 --alsologtostderr -v=5: (1.223241287s)
--- PASS: TestPause/serial/Pause (1.22s)

                                                
                                    
TestPause/serial/VerifyStatus (0.27s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-393014 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-393014 --output=json --layout=cluster: exit status 2 (269.836307ms)

-- stdout --
	{"Name":"pause-393014","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-393014","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
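The `--layout=cluster` JSON above reuses HTTP status codes, as the payload itself shows: 200 for OK, 405 for a stopped component, and 418 for paused. So exit status 2 together with `"StatusName":"Paused"` is the expected shape after a pause. A sketch of pulling `StatusName` out of that payload (JSON shortened and inlined from the log; `sed` stands in for `jq`):

```shell
# Shortened copy of the cluster-status JSON from the log above.
status='{"Name":"pause-393014","StatusCode":418,"StatusName":"Paused","Step":"Done"}'
# Extract the top-level StatusName field.
name=$(printf '%s' "$status" | sed -n 's/.*"StatusName":"\([^"]*\)".*/\1/p')
echo "cluster status: $name"
```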
--- PASS: TestPause/serial/VerifyStatus (0.27s)

                                                
                                    
TestPause/serial/Unpause (0.8s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-393014 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.80s)

                                                
                                    
TestPause/serial/PauseAgain (0.8s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-393014 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.80s)

                                                
                                    
TestPause/serial/DeletePaused (1.05s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-393014 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-393014 --alsologtostderr -v=5: (1.049601386s)
--- PASS: TestPause/serial/DeletePaused (1.05s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.39s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.39s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (114.84s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-925513 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-925513 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m54.840119329s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (114.84s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-925513 "pgrep -a kubelet"
E0717 22:21:37.348373   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/functional-982689/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.37s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-925513 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-bmh28" [2c758c01-889a-4b89-98d1-08435a4eb14f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-bmh28" [2c758c01-889a-4b89-98d1-08435a4eb14f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.01058765s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.37s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-925513 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)
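The DNS subtest above passes as long as `nslookup kubernetes.default` resolves from inside the netcat pod. A sketch of the same check run against canned resolver output instead of a live cluster (the addresses shown are typical in-cluster defaults, not values from this run):

```shell
# Canned, simplified nslookup-style output standing in for the real query.
lookup_output="Server: 10.96.0.10
Name: kubernetes.default.svc.cluster.local
Address: 10.96.0.1"
# The probe succeeds if the answer section contains an Address line.
if printf '%s\n' "$lookup_output" | grep -q '^Address:'; then
  echo "DNS resolution OK"
fi
```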

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-925513 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-925513 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
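The Localhost and HairPin probes above both lean on netcat's exit code: `-z` only tests that the port accepts a connection, `-w 5` bounds the wait at five seconds, and the HairPin case dials the pod's own service name (`netcat`) to verify hairpin NAT. A sketch of the same pass/fail branching with the connection faked (`probe` is a hypothetical stand-in for `nc -w 5 -z <host> 8080`, so it runs anywhere):

```shell
# Fake connect: "netcat" and "localhost" accept (exit 0), others refuse.
probe() {
  case "$1" in
    netcat|localhost) true ;;   # connection succeeds: nc would exit 0
    *) false ;;                 # timeout/refusal: nc would exit non-zero
  esac
}
probe netcat && echo "hairpin OK" || echo "hairpin FAILED"
```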

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-flcql" [c1ad90d4-d0b5-4d0b-aa37-d6d948d3f4aa] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.034487916s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-925513 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.57s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-925513 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-t26tx" [8531f1e9-6c67-42ac-a863-e8893e492562] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-t26tx" [8531f1e9-6c67-42ac-a863-e8893e492562] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.008726886s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.57s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (77.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-925513 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-925513 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m17.358643731s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (77.36s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-925513 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-925513 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-925513 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (95.81s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-925513 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-925513 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m35.810966561s)
--- PASS: TestNetworkPlugins/group/flannel/Start (95.81s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-ts2d5" [75e47a29-4465-40f8-b446-d086fbcec478] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.028673935s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-925513 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.44s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-925513 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-sntf6" [3ec702c6-fd05-441a-b4b7-9744b6fd43ad] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-sntf6" [3ec702c6-fd05-441a-b4b7-9744b6fd43ad] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.009552622s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.44s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-925513 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

TestNetworkPlugins/group/calico/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-925513 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.24s)

TestNetworkPlugins/group/calico/HairPin (0.30s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-925513 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.30s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-925513 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.52s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-925513 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-5spmt" [e39b70af-fe63-4ecf-a714-8742ee6180c1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-5spmt" [e39b70af-fe63-4ecf-a714-8742ee6180c1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.008856315s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.52s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-925513 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.57s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-925513 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-4zhvp" [edfb104f-8a42-4fa8-80d0-3f1be96f21c0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-4zhvp" [edfb104f-8a42-4fa8-80d0-3f1be96f21c0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.009409856s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.57s)

TestNetworkPlugins/group/bridge/Start (108.80s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-925513 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-925513 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m48.800912409s)
--- PASS: TestNetworkPlugins/group/bridge/Start (108.80s)

TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-925513 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-925513 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-925513 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-925513 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-925513 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-925513 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestStartStop/group/old-k8s-version/serial/FirstStart (138.56s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-766710 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-766710 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m18.555448278s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (138.56s)

TestStartStop/group/no-preload/serial/FirstStart (116.65s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-541969 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.27.3
E0717 22:24:05.424208   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/ingress-addon-legacy-126698/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-541969 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.27.3: (1m56.648217811s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (116.65s)

TestNetworkPlugins/group/flannel/ControllerPod (5.10s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-blxlz" [84968725-64ad-4633-bc52-f48fec357fb6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.102130883s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.10s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-925513 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

TestNetworkPlugins/group/flannel/NetCatPod (9.49s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-925513 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-wntkc" [b44739fc-522f-4ace-b8a7-15a08ae76a37] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-wntkc" [b44739fc-522f-4ace-b8a7-15a08ae76a37] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.012297386s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.49s)

TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-925513 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-925513 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-925513 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (75.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-280258 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-280258 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.27.3: (1m15.529178964s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (75.53s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-925513 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

TestNetworkPlugins/group/bridge/NetCatPod (13.43s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-925513 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-62wbw" [65858e92-d9bd-4780-98ec-64bd039452ff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-62wbw" [65858e92-d9bd-4780-98ec-64bd039452ff] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.008117338s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.43s)

TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-925513 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

TestNetworkPlugins/group/bridge/Localhost (0.20s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-925513 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.20s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-925513 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)
E0717 22:30:17.899487   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/bridge-925513/client.crt: no such file or directory
E0717 22:30:17.904742   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/bridge-925513/client.crt: no such file or directory
E0717 22:30:17.914995   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/bridge-925513/client.crt: no such file or directory
E0717 22:30:17.935276   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/bridge-925513/client.crt: no such file or directory
E0717 22:30:17.975629   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/bridge-925513/client.crt: no such file or directory
E0717 22:30:18.056010   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/bridge-925513/client.crt: no such file or directory
E0717 22:30:18.216426   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/bridge-925513/client.crt: no such file or directory
E0717 22:30:18.537196   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/bridge-925513/client.crt: no such file or directory
E0717 22:30:19.178112   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/bridge-925513/client.crt: no such file or directory
E0717 22:30:20.459293   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/bridge-925513/client.crt: no such file or directory
E0717 22:30:23.019654   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/bridge-925513/client.crt: no such file or directory
E0717 22:30:28.140319   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/bridge-925513/client.crt: no such file or directory
E0717 22:30:29.304634   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/flannel-925513/client.crt: no such file or directory
E0717 22:30:36.116360   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.crt: no such file or directory
E0717 22:30:36.143621   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/calico-925513/client.crt: no such file or directory
E0717 22:30:38.380512   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/bridge-925513/client.crt: no such file or directory
E0717 22:30:58.860918   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/bridge-925513/client.crt: no such file or directory
E0717 22:31:01.302647   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/custom-flannel-925513/client.crt: no such file or directory
E0717 22:31:05.537364   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/old-k8s-version-766710/client.crt: no such file or directory
E0717 22:31:05.542647   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/old-k8s-version-766710/client.crt: no such file or directory
E0717 22:31:05.552986   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/old-k8s-version-766710/client.crt: no such file or directory
E0717 22:31:05.573300   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/old-k8s-version-766710/client.crt: no such file or directory
E0717 22:31:05.613627   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/old-k8s-version-766710/client.crt: no such file or directory
E0717 22:31:05.694326   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/old-k8s-version-766710/client.crt: no such file or directory
E0717 22:31:05.854772   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/old-k8s-version-766710/client.crt: no such file or directory
E0717 22:31:06.175723   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/old-k8s-version-766710/client.crt: no such file or directory
E0717 22:31:06.816415   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/old-k8s-version-766710/client.crt: no such file or directory
E0717 22:31:06.913929   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/enable-default-cni-925513/client.crt: no such file or directory
E0717 22:31:08.096938   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/old-k8s-version-766710/client.crt: no such file or directory
E0717 22:31:10.657312   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/old-k8s-version-766710/client.crt: no such file or directory
E0717 22:31:15.777553   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/old-k8s-version-766710/client.crt: no such file or directory
E0717 22:31:26.018744   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/old-k8s-version-766710/client.crt: no such file or directory

TestStartStop/group/newest-cni/serial/FirstStart (61.42s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-594093 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-594093 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.27.3: (1m1.414905671s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (61.42s)

TestStartStop/group/no-preload/serial/DeployApp (9.52s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-541969 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ef9ac637-cce9-4303-b6c4-7648a06e2678] Pending
helpers_test.go:344: "busybox" [ef9ac637-cce9-4303-b6c4-7648a06e2678] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ef9ac637-cce9-4303-b6c4-7648a06e2678] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.029969935s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-541969 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.52s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.55s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-280258 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [aec76e2a-2415-4f19-9b75-90e3c26af548] Pending
helpers_test.go:344: "busybox" [aec76e2a-2415-4f19-9b75-90e3c26af548] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [aec76e2a-2415-4f19-9b75-90e3c26af548] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.026997766s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-280258 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.55s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.60s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-541969 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-541969 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.504167992s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-541969 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.60s)

TestStartStop/group/no-preload/serial/Stop (91.94s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-541969 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-541969 --alsologtostderr -v=3: (1m31.93551551s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.94s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-766710 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9ab79efc-cafc-4939-8ed5-555a9376581b] Pending
helpers_test.go:344: "busybox" [9ab79efc-cafc-4939-8ed5-555a9376581b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9ab79efc-cafc-4939-8ed5-555a9376581b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.031463479s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-766710 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.44s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-280258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-280258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.145755509s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-280258 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (92.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-280258 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-280258 --alsologtostderr -v=3: (1m32.322676104s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (92.32s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.2s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-766710 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-766710 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/old-k8s-version/serial/Stop (92.56s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-766710 --alsologtostderr -v=3
E0717 22:26:37.348225   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/functional-982689/client.crt: no such file or directory
E0717 22:26:37.772724   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/auto-925513/client.crt: no such file or directory
E0717 22:26:37.778040   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/auto-925513/client.crt: no such file or directory
E0717 22:26:37.788310   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/auto-925513/client.crt: no such file or directory
E0717 22:26:37.808572   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/auto-925513/client.crt: no such file or directory
E0717 22:26:37.848867   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/auto-925513/client.crt: no such file or directory
E0717 22:26:37.929206   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/auto-925513/client.crt: no such file or directory
E0717 22:26:38.089505   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/auto-925513/client.crt: no such file or directory
E0717 22:26:38.410147   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/auto-925513/client.crt: no such file or directory
E0717 22:26:39.050859   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/auto-925513/client.crt: no such file or directory
E0717 22:26:40.331736   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/auto-925513/client.crt: no such file or directory
E0717 22:26:42.892669   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/auto-925513/client.crt: no such file or directory
E0717 22:26:48.013130   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/auto-925513/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-766710 --alsologtostderr -v=3: (1m32.555143791s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (92.56s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.39s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-594093 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-594093 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.386351958s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.39s)

TestStartStop/group/newest-cni/serial/Stop (3.09s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-594093 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-594093 --alsologtostderr -v=3: (3.08731556s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.09s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-594093 -n newest-cni-594093
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-594093 -n newest-cni-594093: exit status 7 (58.512554ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-594093 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/newest-cni/serial/SecondStart (45.12s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-594093 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.27.3
E0717 22:26:56.048253   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/kindnet-925513/client.crt: no such file or directory
E0717 22:26:56.053582   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/kindnet-925513/client.crt: no such file or directory
E0717 22:26:56.063858   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/kindnet-925513/client.crt: no such file or directory
E0717 22:26:56.084237   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/kindnet-925513/client.crt: no such file or directory
E0717 22:26:56.125303   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/kindnet-925513/client.crt: no such file or directory
E0717 22:26:56.205605   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/kindnet-925513/client.crt: no such file or directory
E0717 22:26:56.366200   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/kindnet-925513/client.crt: no such file or directory
E0717 22:26:56.686759   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/kindnet-925513/client.crt: no such file or directory
E0717 22:26:57.327725   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/kindnet-925513/client.crt: no such file or directory
E0717 22:26:58.254169   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/auto-925513/client.crt: no such file or directory
E0717 22:26:58.607852   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/kindnet-925513/client.crt: no such file or directory
E0717 22:27:01.168237   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/kindnet-925513/client.crt: no such file or directory
E0717 22:27:06.289016   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/kindnet-925513/client.crt: no such file or directory
E0717 22:27:08.471760   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/ingress-addon-legacy-126698/client.crt: no such file or directory
E0717 22:27:16.530236   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/kindnet-925513/client.crt: no such file or directory
E0717 22:27:18.735307   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/auto-925513/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-594093 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.27.3: (44.764563776s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-594093 -n newest-cni-594093
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (45.12s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-541969 -n no-preload-541969
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-541969 -n no-preload-541969: exit status 7 (66.952617ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-541969 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (604.94s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-541969 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.27.3
E0717 22:27:37.011001   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/kindnet-925513/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-541969 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.27.3: (10m4.679632448s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-541969 -n no-preload-541969
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (604.94s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-594093 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/Pause (2.55s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-594093 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-594093 -n newest-cni-594093
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-594093 -n newest-cni-594093: exit status 2 (259.570345ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-594093 -n newest-cni-594093
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-594093 -n newest-cni-594093: exit status 2 (253.688439ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-594093 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-594093 -n newest-cni-594093
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-594093 -n newest-cni-594093
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.55s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.27s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-280258 -n default-k8s-diff-port-280258
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-280258 -n default-k8s-diff-port-280258: exit status 7 (102.707766ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-280258 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (615.19s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-280258 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-280258 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.27.3: (10m14.930566997s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-280258 -n default-k8s-diff-port-280258
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (615.19s)

TestStartStop/group/embed-certs/serial/FirstStart (131.53s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-392439 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-392439 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.27.3: (2m11.533506532s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (131.53s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-766710 -n old-k8s-version-766710
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-766710 -n old-k8s-version-766710: exit status 7 (58.871738ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-766710 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/old-k8s-version/serial/SecondStart (112.67s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-766710 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
E0717 22:27:52.300401   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/calico-925513/client.crt: no such file or directory
E0717 22:27:52.305673   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/calico-925513/client.crt: no such file or directory
E0717 22:27:52.315933   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/calico-925513/client.crt: no such file or directory
E0717 22:27:52.336201   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/calico-925513/client.crt: no such file or directory
E0717 22:27:52.376516   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/calico-925513/client.crt: no such file or directory
E0717 22:27:52.457429   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/calico-925513/client.crt: no such file or directory
E0717 22:27:52.618144   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/calico-925513/client.crt: no such file or directory
E0717 22:27:52.938258   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/calico-925513/client.crt: no such file or directory
E0717 22:27:53.579182   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/calico-925513/client.crt: no such file or directory
E0717 22:27:54.859448   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/calico-925513/client.crt: no such file or directory
E0717 22:27:57.420330   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/calico-925513/client.crt: no such file or directory
E0717 22:27:59.695587   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/auto-925513/client.crt: no such file or directory
E0717 22:28:02.541145   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/calico-925513/client.crt: no such file or directory
E0717 22:28:12.782294   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/calico-925513/client.crt: no such file or directory
E0717 22:28:17.456592   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/custom-flannel-925513/client.crt: no such file or directory
E0717 22:28:17.461872   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/custom-flannel-925513/client.crt: no such file or directory
E0717 22:28:17.472127   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/custom-flannel-925513/client.crt: no such file or directory
E0717 22:28:17.492500   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/custom-flannel-925513/client.crt: no such file or directory
E0717 22:28:17.533646   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/custom-flannel-925513/client.crt: no such file or directory
E0717 22:28:17.614629   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/custom-flannel-925513/client.crt: no such file or directory
E0717 22:28:17.775209   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/custom-flannel-925513/client.crt: no such file or directory
E0717 22:28:17.971707   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/kindnet-925513/client.crt: no such file or directory
E0717 22:28:18.096210   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/custom-flannel-925513/client.crt: no such file or directory
E0717 22:28:18.737138   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/custom-flannel-925513/client.crt: no such file or directory
E0717 22:28:20.018300   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/custom-flannel-925513/client.crt: no such file or directory
E0717 22:28:22.578989   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/custom-flannel-925513/client.crt: no such file or directory
E0717 22:28:23.070019   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/enable-default-cni-925513/client.crt: no such file or directory
E0717 22:28:23.075338   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/enable-default-cni-925513/client.crt: no such file or directory
E0717 22:28:23.085614   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/enable-default-cni-925513/client.crt: no such file or directory
E0717 22:28:23.105899   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/enable-default-cni-925513/client.crt: no such file or directory
E0717 22:28:23.146217   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/enable-default-cni-925513/client.crt: no such file or directory
E0717 22:28:23.226570   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/enable-default-cni-925513/client.crt: no such file or directory
E0717 22:28:23.387029   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/enable-default-cni-925513/client.crt: no such file or directory
E0717 22:28:23.708074   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/enable-default-cni-925513/client.crt: no such file or directory
E0717 22:28:24.348826   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/enable-default-cni-925513/client.crt: no such file or directory
E0717 22:28:25.629567   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/enable-default-cni-925513/client.crt: no such file or directory
E0717 22:28:27.699881   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/custom-flannel-925513/client.crt: no such file or directory
E0717 22:28:28.190462   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/enable-default-cni-925513/client.crt: no such file or directory
E0717 22:28:33.263002   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/calico-925513/client.crt: no such file or directory
E0717 22:28:33.311226   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/enable-default-cni-925513/client.crt: no such file or directory
E0717 22:28:37.940659   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/custom-flannel-925513/client.crt: no such file or directory
E0717 22:28:43.551637   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/enable-default-cni-925513/client.crt: no such file or directory
E0717 22:28:58.421620   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/custom-flannel-925513/client.crt: no such file or directory
E0717 22:29:04.032568   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/enable-default-cni-925513/client.crt: no such file or directory
E0717 22:29:05.424988   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/ingress-addon-legacy-126698/client.crt: no such file or directory
E0717 22:29:07.382563   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/flannel-925513/client.crt: no such file or directory
E0717 22:29:07.387853   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/flannel-925513/client.crt: no such file or directory
E0717 22:29:07.398133   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/flannel-925513/client.crt: no such file or directory
E0717 22:29:07.418416   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/flannel-925513/client.crt: no such file or directory
E0717 22:29:07.458864   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/flannel-925513/client.crt: no such file or directory
E0717 22:29:07.539310   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/flannel-925513/client.crt: no such file or directory
E0717 22:29:07.700371   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/flannel-925513/client.crt: no such file or directory
E0717 22:29:08.020728   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/flannel-925513/client.crt: no such file or directory
E0717 22:29:08.660887   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/flannel-925513/client.crt: no such file or directory
E0717 22:29:09.941092   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/flannel-925513/client.crt: no such file or directory
E0717 22:29:12.501349   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/flannel-925513/client.crt: no such file or directory
E0717 22:29:14.223172   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/calico-925513/client.crt: no such file or directory
E0717 22:29:17.622480   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/flannel-925513/client.crt: no such file or directory
E0717 22:29:21.616557   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/auto-925513/client.crt: no such file or directory
E0717 22:29:27.863055   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/flannel-925513/client.crt: no such file or directory
E0717 22:29:39.382087   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/custom-flannel-925513/client.crt: no such file or directory
E0717 22:29:39.892845   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/kindnet-925513/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-766710 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (1m52.401689883s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-766710 -n old-k8s-version-766710
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (112.67s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (18.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0717 22:29:44.993340   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/enable-default-cni-925513/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-lvmtz" [10ab5abd-e655-46e3-9525-833e1b00b991] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0717 22:29:48.344184   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/flannel-925513/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-lvmtz" [10ab5abd-e655-46e3-9525-833e1b00b991] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 18.019480714s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (18.02s)

TestStartStop/group/embed-certs/serial/DeployApp (8.51s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-392439 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2c9b2e08-6e73-418f-82bc-e3f6c0008dcf] Pending
helpers_test.go:344: "busybox" [2c9b2e08-6e73-418f-82bc-e3f6c0008dcf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2c9b2e08-6e73-418f-82bc-e3f6c0008dcf] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.029913773s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-392439 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.51s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-lvmtz" [10ab5abd-e655-46e3-9525-833e1b00b991] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008752753s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-766710 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-392439 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-392439 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.1375315s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-392439 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/embed-certs/serial/Stop (92.19s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-392439 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-392439 --alsologtostderr -v=3: (1m32.185150653s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (92.19s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-766710 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/old-k8s-version/serial/Pause (2.56s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-766710 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-766710 -n old-k8s-version-766710
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-766710 -n old-k8s-version-766710: exit status 2 (253.879167ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-766710 -n old-k8s-version-766710
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-766710 -n old-k8s-version-766710: exit status 2 (250.283737ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-766710 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-766710 -n old-k8s-version-766710
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-766710 -n old-k8s-version-766710
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.56s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-392439 -n embed-certs-392439
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-392439 -n embed-certs-392439: exit status 7 (58.091442ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-392439 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/embed-certs/serial/SecondStart (307.54s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-392439 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.27.3
E0717 22:31:37.348561   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/functional-982689/client.crt: no such file or directory
E0717 22:31:37.773404   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/auto-925513/client.crt: no such file or directory
E0717 22:31:39.821282   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/bridge-925513/client.crt: no such file or directory
E0717 22:31:46.499402   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/old-k8s-version-766710/client.crt: no such file or directory
E0717 22:31:51.225247   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/flannel-925513/client.crt: no such file or directory
E0717 22:31:56.048887   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/kindnet-925513/client.crt: no such file or directory
E0717 22:32:05.457606   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/auto-925513/client.crt: no such file or directory
E0717 22:32:23.733613   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/kindnet-925513/client.crt: no such file or directory
E0717 22:32:27.460158   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/old-k8s-version-766710/client.crt: no such file or directory
E0717 22:32:52.300084   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/calico-925513/client.crt: no such file or directory
E0717 22:33:01.742213   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/bridge-925513/client.crt: no such file or directory
E0717 22:33:17.456963   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/custom-flannel-925513/client.crt: no such file or directory
E0717 22:33:19.984452   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/calico-925513/client.crt: no such file or directory
E0717 22:33:23.069570   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/enable-default-cni-925513/client.crt: no such file or directory
E0717 22:33:45.143295   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/custom-flannel-925513/client.crt: no such file or directory
E0717 22:33:49.381298   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/old-k8s-version-766710/client.crt: no such file or directory
E0717 22:33:50.755078   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/enable-default-cni-925513/client.crt: no such file or directory
E0717 22:34:05.424964   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/ingress-addon-legacy-126698/client.crt: no such file or directory
E0717 22:34:07.381975   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/flannel-925513/client.crt: no such file or directory
E0717 22:34:35.066482   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/flannel-925513/client.crt: no such file or directory
E0717 22:35:17.899088   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/bridge-925513/client.crt: no such file or directory
E0717 22:35:19.166305   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.crt: no such file or directory
E0717 22:35:36.116355   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/addons-061866/client.crt: no such file or directory
E0717 22:35:45.582616   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/bridge-925513/client.crt: no such file or directory
E0717 22:36:05.537986   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/old-k8s-version-766710/client.crt: no such file or directory
E0717 22:36:33.221519   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/old-k8s-version-766710/client.crt: no such file or directory
E0717 22:36:37.348130   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/functional-982689/client.crt: no such file or directory
E0717 22:36:37.772951   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/auto-925513/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-392439 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.27.3: (5m7.283372845s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-392439 -n embed-certs-392439
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (307.54s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-rnl97" [fc552216-3b98-443f-81d7-9fc071f6147f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.019070906s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-rnl97" [fc552216-3b98-443f-81d7-9fc071f6147f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009151561s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-392439 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-392439 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.59s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-392439 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-392439 -n embed-certs-392439
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-392439 -n embed-certs-392439: exit status 2 (241.830469ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-392439 -n embed-certs-392439
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-392439 -n embed-certs-392439: exit status 2 (249.436369ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-392439 --alsologtostderr -v=1
E0717 22:36:56.048210   13797 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-6542/.minikube/profiles/kindnet-925513/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-392439 -n embed-certs-392439
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-392439 -n embed-certs-392439
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.59s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-lqqhx" [e3cb49a7-a472-4a5c-bec0-2a591b4ce799] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.016615336s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-lqqhx" [e3cb49a7-a472-4a5c-bec0-2a591b4ce799] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010197599s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-541969 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-541969 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/Pause (2.47s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-541969 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-541969 -n no-preload-541969
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-541969 -n no-preload-541969: exit status 2 (232.234296ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-541969 -n no-preload-541969
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-541969 -n no-preload-541969: exit status 2 (239.097214ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-541969 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-541969 -n no-preload-541969
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-541969 -n no-preload-541969
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.47s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-7d76r" [8aa798c0-32d2-49d0-822e-15f91d044371] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.016118412s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-7d76r" [8aa798c0-32d2-49d0-822e-15f91d044371] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008508897s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-280258 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-280258 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-280258 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-280258 -n default-k8s-diff-port-280258
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-280258 -n default-k8s-diff-port-280258: exit status 2 (245.425608ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-280258 -n default-k8s-diff-port-280258
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-280258 -n default-k8s-diff-port-280258: exit status 2 (240.150242ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-280258 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-280258 -n default-k8s-diff-port-280258
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-280258 -n default-k8s-diff-port-280258
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.45s)
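The `status --format={{.APIServer}}` and `status --format={{.Kubelet}}` invocations above select single fields from minikube's status struct via Go's `text/template` package. The sketch below illustrates how such a selector is evaluated; the `Status` struct here is an illustrative stand-in matching the field names in the log, not minikube's actual type:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Status is a hypothetical stand-in for the struct minikube renders
// with --format; its fields match the {{.APIServer}} / {{.Kubelet}}
// selectors used in the log output above.
type Status struct {
	APIServer string
	Kubelet   string
}

// render parses a --format style template and executes it against st.
func render(format string, st Status) (string, error) {
	tmpl, err := template.New("status").Parse(format)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, st); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	// The paused cluster in the log reported APIServer=Paused, Kubelet=Stopped.
	paused := Status{APIServer: "Paused", Kubelet: "Stopped"}
	out, _ := render("{{.APIServer}}", paused)
	fmt.Println(out) // Paused
	out, _ = render("{{.Kubelet}}", paused)
	fmt.Println(out) // Stopped
}
```

This is why the test treats exit status 2 as "may be ok": the rendered field value ("Paused"/"Stopped") carries the real state, while the exit code only signals that some component is not running.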

Test skip (31/303)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.27.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.27.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.3/cached-images (0.00s)

TestDownloadOnly/v1.27.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.27.3/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.3/binaries (0.00s)

TestDownloadOnly/v1.27.3/kubectl (0s)
=== RUN   TestDownloadOnly/v1.27.3/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.27.3/kubectl (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:296: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (2.8s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-925513 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-925513

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-925513

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-925513

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-925513

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-925513

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-925513

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-925513

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-925513

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-925513

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-925513

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-925513"

>>> host: /etc/hosts:
* Profile "kubenet-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-925513"

>>> host: /etc/resolv.conf:
* Profile "kubenet-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-925513"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-925513

>>> host: crictl pods:
* Profile "kubenet-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-925513"

>>> host: crictl containers:
* Profile "kubenet-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-925513"

>>> k8s: describe netcat deployment:
error: context "kubenet-925513" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-925513" does not exist

>>> k8s: netcat logs:
error: context "kubenet-925513" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-925513" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-925513" does not exist

>>> k8s: coredns logs:
error: context "kubenet-925513" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-925513" does not exist

>>> k8s: api server logs:
error: context "kubenet-925513" does not exist

>>> host: /etc/cni:
* Profile "kubenet-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-925513"

>>> host: ip a s:
* Profile "kubenet-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-925513"

>>> host: ip r s:
* Profile "kubenet-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-925513"

>>> host: iptables-save:
* Profile "kubenet-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-925513"

>>> host: iptables table nat:
* Profile "kubenet-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-925513"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-925513" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-925513" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-925513" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-925513"

>>> host: kubelet daemon config:
* Profile "kubenet-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-925513"

>>> k8s: kubelet logs:
* Profile "kubenet-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-925513"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-925513"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-925513"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-925513

>>> host: docker daemon status:
* Profile "kubenet-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-925513"

>>> host: docker daemon config:
* Profile "kubenet-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-925513"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-925513"

>>> host: docker system info:
* Profile "kubenet-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-925513"

>>> host: cri-docker daemon status:
* Profile "kubenet-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-925513"

>>> host: cri-docker daemon config:
* Profile "kubenet-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-925513"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-925513"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-925513"

>>> host: cri-dockerd version:
* Profile "kubenet-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-925513"

>>> host: containerd daemon status:
* Profile "kubenet-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-925513"

>>> host: containerd daemon config:
* Profile "kubenet-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-925513"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-925513"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-925513"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-925513"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-925513"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-925513"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-925513"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-925513"

                                                
                                                
----------------------- debugLogs end: kubenet-925513 [took: 2.660314969s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-925513" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-925513
--- SKIP: TestNetworkPlugins/group/kubenet (2.80s)

                                                
                                    
TestNetworkPlugins/group/cilium (5.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-925513 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-925513

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-925513

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-925513

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-925513

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-925513

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-925513

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-925513

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-925513

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-925513

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-925513

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925513"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925513"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925513"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-925513

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925513"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925513"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-925513" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-925513" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-925513" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-925513" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-925513" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-925513" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-925513" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-925513" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925513"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925513"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925513"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925513"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925513"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-925513

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-925513

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-925513" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-925513" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-925513

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-925513

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-925513" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-925513" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-925513" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-925513" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-925513" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925513"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925513"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925513"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925513"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925513"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-925513

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925513"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925513"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925513"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925513"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925513"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925513"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925513"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925513"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925513"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925513"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925513"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925513"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925513"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925513"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925513"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925513"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925513"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-925513" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925513"

                                                
                                                
----------------------- debugLogs end: cilium-925513 [took: 5.025530744s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-925513" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-925513
--- SKIP: TestNetworkPlugins/group/cilium (5.18s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-003685" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-003685
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    