Test Report: Docker_Linux_containerd_arm64 17586

d1a75fe08206deb6fc1cd915add724f43e3a5600:2023-11-09:31801

Test fail (12/306)

TestAddons/parallel/Ingress (38.12s)
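In short: the ingress controller and the test nginx pod became ready and the in-cluster curl succeeded, but the ingress-dns lookup against the node IP timed out after roughly 15 seconds, which is what fails the test at addons_test.go:298. The failing step reduces to the DNS probe sketched below in Go (a minimal approximation, not the test's actual source; the real check at addons_test.go:296 shells out to nslookup the same way):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// 192.168.49.2 is the minikube node IP reported by "minikube ip" in the
	// log below; hello-john.test comes from testdata/ingress-dns-example-v1.yaml.
	out, err := exec.Command("nslookup", "hello-john.test", "192.168.49.2").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		// This run printed ";; connection timed out; no servers could be reached"
		// and exited with status 1.
		fmt.Fprintln(os.Stderr, "nslookup failed:", err)
		os.Exit(1)
	}
}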

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-118967 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-118967 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-118967 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [838183db-7533-4980-ac6f-7e5dfd7ecf95] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [838183db-7533-4980-ac6f-7e5dfd7ecf95] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.015547398s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p addons-118967 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context addons-118967 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p addons-118967 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.054807563s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached
stderr: 
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p addons-118967 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p addons-118967 addons disable ingress-dns --alsologtostderr -v=1: (1.037692489s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p addons-118967 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p addons-118967 addons disable ingress --alsologtostderr -v=1: (7.793804682s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-118967
helpers_test.go:235: (dbg) docker inspect addons-118967:

-- stdout --
	[
	    {
	        "Id": "129d2e229b594dfd4e8c5b32d6b0a526c617a58e08a29fefa70806853ce8737c",
	        "Created": "2023-11-08T23:36:03.436036748Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 755862,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-08T23:36:03.77668061Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:62753ecb37c4e3c5bf7b6c8d02fe88b543f553e92492fca245cded98b0d364dd",
	        "ResolvConfPath": "/var/lib/docker/containers/129d2e229b594dfd4e8c5b32d6b0a526c617a58e08a29fefa70806853ce8737c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/129d2e229b594dfd4e8c5b32d6b0a526c617a58e08a29fefa70806853ce8737c/hostname",
	        "HostsPath": "/var/lib/docker/containers/129d2e229b594dfd4e8c5b32d6b0a526c617a58e08a29fefa70806853ce8737c/hosts",
	        "LogPath": "/var/lib/docker/containers/129d2e229b594dfd4e8c5b32d6b0a526c617a58e08a29fefa70806853ce8737c/129d2e229b594dfd4e8c5b32d6b0a526c617a58e08a29fefa70806853ce8737c-json.log",
	        "Name": "/addons-118967",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-118967:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-118967",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/be1739f286d8b00f4e2bae8dc5171194043e39632e6ebfe079c02ef5561d0b74-init/diff:/var/lib/docker/overlay2/a37793fd41a65d2d53e46d1ba8e85f7ca52242d993ce6ed8de0d2d0e3cddac68/diff",
	                "MergedDir": "/var/lib/docker/overlay2/be1739f286d8b00f4e2bae8dc5171194043e39632e6ebfe079c02ef5561d0b74/merged",
	                "UpperDir": "/var/lib/docker/overlay2/be1739f286d8b00f4e2bae8dc5171194043e39632e6ebfe079c02ef5561d0b74/diff",
	                "WorkDir": "/var/lib/docker/overlay2/be1739f286d8b00f4e2bae8dc5171194043e39632e6ebfe079c02ef5561d0b74/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-118967",
	                "Source": "/var/lib/docker/volumes/addons-118967/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-118967",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-118967",
	                "name.minikube.sigs.k8s.io": "addons-118967",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9465a4243d804b1c00102711cf97984c2331b5f48c6948bcbf70085e8b6e203d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33702"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33701"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33698"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33700"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33699"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9465a4243d80",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-118967": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "129d2e229b59",
	                        "addons-118967"
	                    ],
	                    "NetworkID": "45e24fb609988b4856cd5cba88cf4946d9343f743cb200329e52a3cd41c8c45a",
	                    "EndpointID": "2e11a0d41a2b46ba9808b6dce93b0c069bbd8e01e1c1644da1c87844927ca27b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
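Note on the inspect output above: the node container was healthy (State.Running=true) and held the static address 192.168.49.2 on the addons-118967 network, i.e. exactly the IP the failed nslookup targeted, which suggests the timeout lies with the ingress-dns service rather than the container itself. For reference, a sketch of pulling that address the way the harness does later in this log (the Go template matches the one the test passes to the docker CLI via cli_runner.go, trimmed here to IPv4 only):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Equivalent to:
	//   docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}" addons-118967
	out, err := exec.Command("docker", "container", "inspect", "-f",
		"{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}",
		"addons-118967").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Printf("node IP: %s\n", out) // expected: 192.168.49.2
}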
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-118967 -n addons-118967
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-118967 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-118967 logs -n 25: (1.650937386s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0 | 08 Nov 23 23:35 UTC | 08 Nov 23 23:35 UTC |
	| delete  | -p download-only-282555                                                                     | download-only-282555   | jenkins | v1.32.0 | 08 Nov 23 23:35 UTC | 08 Nov 23 23:35 UTC |
	| delete  | -p download-only-282555                                                                     | download-only-282555   | jenkins | v1.32.0 | 08 Nov 23 23:35 UTC | 08 Nov 23 23:35 UTC |
	| start   | --download-only -p                                                                          | download-docker-403562 | jenkins | v1.32.0 | 08 Nov 23 23:35 UTC |                     |
	|         | download-docker-403562                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p download-docker-403562                                                                   | download-docker-403562 | jenkins | v1.32.0 | 08 Nov 23 23:35 UTC | 08 Nov 23 23:35 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-304793   | jenkins | v1.32.0 | 08 Nov 23 23:35 UTC |                     |
	|         | binary-mirror-304793                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:39905                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-304793                                                                     | binary-mirror-304793   | jenkins | v1.32.0 | 08 Nov 23 23:35 UTC | 08 Nov 23 23:35 UTC |
	| addons  | enable dashboard -p                                                                         | addons-118967          | jenkins | v1.32.0 | 08 Nov 23 23:35 UTC |                     |
	|         | addons-118967                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-118967          | jenkins | v1.32.0 | 08 Nov 23 23:35 UTC |                     |
	|         | addons-118967                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-118967 --wait=true                                                                | addons-118967          | jenkins | v1.32.0 | 08 Nov 23 23:35 UTC | 08 Nov 23 23:38 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-118967 ip                                                                            | addons-118967          | jenkins | v1.32.0 | 08 Nov 23 23:38 UTC | 08 Nov 23 23:38 UTC |
	| addons  | addons-118967 addons disable                                                                | addons-118967          | jenkins | v1.32.0 | 08 Nov 23 23:38 UTC | 08 Nov 23 23:38 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-118967          | jenkins | v1.32.0 | 08 Nov 23 23:38 UTC | 08 Nov 23 23:38 UTC |
	|         | -p addons-118967                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-118967 ssh cat                                                                       | addons-118967          | jenkins | v1.32.0 | 08 Nov 23 23:38 UTC | 08 Nov 23 23:38 UTC |
	|         | /opt/local-path-provisioner/pvc-cbcd01e2-78c3-4735-bcd6-14fcf27e46a7_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-118967 addons disable                                                                | addons-118967          | jenkins | v1.32.0 | 08 Nov 23 23:38 UTC | 08 Nov 23 23:39 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-118967 addons                                                                        | addons-118967          | jenkins | v1.32.0 | 08 Nov 23 23:38 UTC | 08 Nov 23 23:38 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-118967 addons                                                                        | addons-118967          | jenkins | v1.32.0 | 08 Nov 23 23:38 UTC | 08 Nov 23 23:38 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-118967          | jenkins | v1.32.0 | 08 Nov 23 23:38 UTC | 08 Nov 23 23:38 UTC |
	|         | addons-118967                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-118967          | jenkins | v1.32.0 | 08 Nov 23 23:38 UTC | 08 Nov 23 23:38 UTC |
	|         | -p addons-118967                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-118967 addons                                                                        | addons-118967          | jenkins | v1.32.0 | 08 Nov 23 23:39 UTC | 08 Nov 23 23:39 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-118967          | jenkins | v1.32.0 | 08 Nov 23 23:39 UTC | 08 Nov 23 23:39 UTC |
	|         | addons-118967                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-118967 ssh curl -s                                                                   | addons-118967          | jenkins | v1.32.0 | 08 Nov 23 23:39 UTC | 08 Nov 23 23:39 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-118967 ip                                                                            | addons-118967          | jenkins | v1.32.0 | 08 Nov 23 23:39 UTC | 08 Nov 23 23:39 UTC |
	| addons  | addons-118967 addons disable                                                                | addons-118967          | jenkins | v1.32.0 | 08 Nov 23 23:39 UTC | 08 Nov 23 23:39 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-118967 addons disable                                                                | addons-118967          | jenkins | v1.32.0 | 08 Nov 23 23:39 UTC | 08 Nov 23 23:39 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/08 23:35:38
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 23:35:38.690284  755400 out.go:296] Setting OutFile to fd 1 ...
	I1108 23:35:38.690498  755400 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:35:38.690509  755400 out.go:309] Setting ErrFile to fd 2...
	I1108 23:35:38.690517  755400 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:35:38.690777  755400 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17586-749551/.minikube/bin
	I1108 23:35:38.691195  755400 out.go:303] Setting JSON to false
	I1108 23:35:38.692145  755400 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22688,"bootTime":1699463851,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1108 23:35:38.692221  755400 start.go:138] virtualization:  
	I1108 23:35:38.694381  755400 out.go:177] * [addons-118967] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1108 23:35:38.696853  755400 out.go:177]   - MINIKUBE_LOCATION=17586
	I1108 23:35:38.698628  755400 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 23:35:38.696951  755400 notify.go:220] Checking for updates...
	I1108 23:35:38.700655  755400 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17586-749551/kubeconfig
	I1108 23:35:38.702275  755400 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-749551/.minikube
	I1108 23:35:38.704140  755400 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 23:35:38.705881  755400 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 23:35:38.707785  755400 driver.go:378] Setting default libvirt URI to qemu:///system
	I1108 23:35:38.731225  755400 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1108 23:35:38.731323  755400 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 23:35:38.810669  755400 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-11-08 23:35:38.801052945 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1108 23:35:38.810770  755400 docker.go:295] overlay module found
	I1108 23:35:38.812725  755400 out.go:177] * Using the docker driver based on user configuration
	I1108 23:35:38.814393  755400 start.go:298] selected driver: docker
	I1108 23:35:38.814413  755400 start.go:902] validating driver "docker" against <nil>
	I1108 23:35:38.814426  755400 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 23:35:38.815060  755400 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 23:35:38.899837  755400 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-11-08 23:35:38.890046197 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1108 23:35:38.900009  755400 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1108 23:35:38.900228  755400 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 23:35:38.901942  755400 out.go:177] * Using Docker driver with root privileges
	I1108 23:35:38.903359  755400 cni.go:84] Creating CNI manager for ""
	I1108 23:35:38.903377  755400 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1108 23:35:38.903404  755400 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 23:35:38.903421  755400 start_flags.go:323] config:
	{Name:addons-118967 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-118967 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 23:35:38.905259  755400 out.go:177] * Starting control plane node addons-118967 in cluster addons-118967
	I1108 23:35:38.906939  755400 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1108 23:35:38.908677  755400 out.go:177] * Pulling base image ...
	I1108 23:35:38.910895  755400 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1108 23:35:38.910952  755400 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17586-749551/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4
	I1108 23:35:38.910964  755400 cache.go:56] Caching tarball of preloaded images
	I1108 23:35:38.911045  755400 preload.go:174] Found /home/jenkins/minikube-integration/17586-749551/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1108 23:35:38.911067  755400 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on containerd
	I1108 23:35:38.911423  755400 profile.go:148] Saving config to /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/config.json ...
	I1108 23:35:38.911453  755400 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/config.json: {Name:mk20a3a69dd8becbd8939b16f9f728aa65af9512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:35:38.911614  755400 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1108 23:35:38.928145  755400 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1108 23:35:38.928273  755400 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory
	I1108 23:35:38.928298  755400 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory, skipping pull
	I1108 23:35:38.928307  755400 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in cache, skipping pull
	I1108 23:35:38.928315  755400 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 as a tarball
	I1108 23:35:38.928325  755400 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 from local cache
	I1108 23:35:55.505867  755400 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 from cached tarball
	I1108 23:35:55.505912  755400 cache.go:194] Successfully downloaded all kic artifacts
	I1108 23:35:55.505941  755400 start.go:365] acquiring machines lock for addons-118967: {Name:mk80ffe09f00e0b55a9720d7f901b41e197f0cff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 23:35:55.506419  755400 start.go:369] acquired machines lock for "addons-118967" in 450.451µs
	I1108 23:35:55.506483  755400 start.go:93] Provisioning new machine with config: &{Name:addons-118967 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-118967 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1108 23:35:55.506569  755400 start.go:125] createHost starting for "" (driver="docker")
	I1108 23:35:55.508991  755400 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1108 23:35:55.509283  755400 start.go:159] libmachine.API.Create for "addons-118967" (driver="docker")
	I1108 23:35:55.509325  755400 client.go:168] LocalClient.Create starting
	I1108 23:35:55.509464  755400 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem
	I1108 23:35:56.584030  755400 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem
	I1108 23:35:57.353470  755400 cli_runner.go:164] Run: docker network inspect addons-118967 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 23:35:57.370570  755400 cli_runner.go:211] docker network inspect addons-118967 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 23:35:57.370663  755400 network_create.go:281] running [docker network inspect addons-118967] to gather additional debugging logs...
	I1108 23:35:57.370684  755400 cli_runner.go:164] Run: docker network inspect addons-118967
	W1108 23:35:57.387684  755400 cli_runner.go:211] docker network inspect addons-118967 returned with exit code 1
	I1108 23:35:57.387731  755400 network_create.go:284] error running [docker network inspect addons-118967]: docker network inspect addons-118967: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-118967 not found
	I1108 23:35:57.387747  755400 network_create.go:286] output of [docker network inspect addons-118967]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-118967 not found
	
	** /stderr **
	I1108 23:35:57.387877  755400 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 23:35:57.405842  755400 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002552850}
	I1108 23:35:57.405893  755400 network_create.go:124] attempt to create docker network addons-118967 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1108 23:35:57.405974  755400 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-118967 addons-118967
	I1108 23:35:57.481716  755400 network_create.go:108] docker network addons-118967 192.168.49.0/24 created
	I1108 23:35:57.481753  755400 kic.go:121] calculated static IP "192.168.49.2" for the "addons-118967" container
	I1108 23:35:57.481833  755400 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 23:35:57.499090  755400 cli_runner.go:164] Run: docker volume create addons-118967 --label name.minikube.sigs.k8s.io=addons-118967 --label created_by.minikube.sigs.k8s.io=true
	I1108 23:35:57.521528  755400 oci.go:103] Successfully created a docker volume addons-118967
	I1108 23:35:57.521635  755400 cli_runner.go:164] Run: docker run --rm --name addons-118967-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-118967 --entrypoint /usr/bin/test -v addons-118967:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1108 23:35:58.795095  755400 cli_runner.go:217] Completed: docker run --rm --name addons-118967-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-118967 --entrypoint /usr/bin/test -v addons-118967:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib: (1.273398592s)
	I1108 23:35:58.795125  755400 oci.go:107] Successfully prepared a docker volume addons-118967
	I1108 23:35:58.795147  755400 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1108 23:35:58.795169  755400 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 23:35:58.795258  755400 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17586-749551/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-118967:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1108 23:36:03.352195  755400 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17586-749551/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-118967:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.5568907s)
	I1108 23:36:03.352232  755400 kic.go:203] duration metric: took 4.557063 seconds to extract preloaded images to volume
	W1108 23:36:03.352379  755400 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1108 23:36:03.352499  755400 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 23:36:03.419625  755400 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-118967 --name addons-118967 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-118967 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-118967 --network addons-118967 --ip 192.168.49.2 --volume addons-118967:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1108 23:36:03.786528  755400 cli_runner.go:164] Run: docker container inspect addons-118967 --format={{.State.Running}}
	I1108 23:36:03.808628  755400 cli_runner.go:164] Run: docker container inspect addons-118967 --format={{.State.Status}}
	I1108 23:36:03.833479  755400 cli_runner.go:164] Run: docker exec addons-118967 stat /var/lib/dpkg/alternatives/iptables
	I1108 23:36:03.911425  755400 oci.go:144] the created container "addons-118967" has a running status.
	I1108 23:36:03.911451  755400 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/addons-118967/id_rsa...
	I1108 23:36:04.675953  755400 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17586-749551/.minikube/machines/addons-118967/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 23:36:04.711045  755400 cli_runner.go:164] Run: docker container inspect addons-118967 --format={{.State.Status}}
	I1108 23:36:04.743324  755400 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 23:36:04.743351  755400 kic_runner.go:114] Args: [docker exec --privileged addons-118967 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 23:36:04.824895  755400 cli_runner.go:164] Run: docker container inspect addons-118967 --format={{.State.Status}}
	I1108 23:36:04.850892  755400 machine.go:88] provisioning docker machine ...
	I1108 23:36:04.850925  755400 ubuntu.go:169] provisioning hostname "addons-118967"
	I1108 23:36:04.850992  755400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118967
	I1108 23:36:04.872850  755400 main.go:141] libmachine: Using SSH client type: native
	I1108 23:36:04.873285  755400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 33702 <nil> <nil>}
	I1108 23:36:04.873305  755400 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-118967 && echo "addons-118967" | sudo tee /etc/hostname
	I1108 23:36:05.037863  755400 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-118967
	
	I1108 23:36:05.037969  755400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118967
	I1108 23:36:05.065232  755400 main.go:141] libmachine: Using SSH client type: native
	I1108 23:36:05.065733  755400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 33702 <nil> <nil>}
	I1108 23:36:05.065754  755400 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-118967' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-118967/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-118967' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 23:36:05.206790  755400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1108 23:36:05.206860  755400 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17586-749551/.minikube CaCertPath:/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17586-749551/.minikube}
	I1108 23:36:05.206910  755400 ubuntu.go:177] setting up certificates
	I1108 23:36:05.206952  755400 provision.go:83] configureAuth start
	I1108 23:36:05.207050  755400 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-118967
	I1108 23:36:05.226347  755400 provision.go:138] copyHostCerts
	I1108 23:36:05.226441  755400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1108 23:36:05.226562  755400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1108 23:36:05.226621  755400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1108 23:36:05.226668  755400 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.addons-118967 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-118967]
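
The server cert generated here is signed by the minikube CA and carries the SANs listed in the log line (192.168.49.2, 127.0.0.1, localhost, minikube, addons-118967). A minimal, self-contained sketch of that kind of CA-signed server certificate using Go's crypto/x509 (assumptions: fresh in-memory keys instead of minikube's ca.pem/ca-key.pem files; error handling elided for brevity):

// servercert.go -- a sketch, not minikube's provision code.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key and self-signed CA cert (minikube loads these from disk).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-118967"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		DNSNames:     []string{"localhost", "minikube", "addons-118967"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
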
	I1108 23:36:05.541556  755400 provision.go:172] copyRemoteCerts
	I1108 23:36:05.541627  755400 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 23:36:05.541675  755400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118967
	I1108 23:36:05.559340  755400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33702 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/addons-118967/id_rsa Username:docker}
	I1108 23:36:05.652349  755400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 23:36:05.681730  755400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1108 23:36:05.710711  755400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 23:36:05.740636  755400 provision.go:86] duration metric: configureAuth took 533.654841ms
	I1108 23:36:05.740704  755400 ubuntu.go:193] setting minikube options for container-runtime
	I1108 23:36:05.740911  755400 config.go:182] Loaded profile config "addons-118967": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1108 23:36:05.740925  755400 machine.go:91] provisioned docker machine in 890.012604ms
	I1108 23:36:05.740938  755400 client.go:171] LocalClient.Create took 10.231600492s
	I1108 23:36:05.740964  755400 start.go:167] duration metric: libmachine.API.Create for "addons-118967" took 10.23168055s
	I1108 23:36:05.740976  755400 start.go:300] post-start starting for "addons-118967" (driver="docker")
	I1108 23:36:05.740988  755400 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 23:36:05.741056  755400 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 23:36:05.741103  755400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118967
	I1108 23:36:05.759191  755400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33702 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/addons-118967/id_rsa Username:docker}
	I1108 23:36:05.852868  755400 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 23:36:05.857226  755400 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 23:36:05.857268  755400 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1108 23:36:05.857285  755400 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1108 23:36:05.857297  755400 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1108 23:36:05.857309  755400 filesync.go:126] Scanning /home/jenkins/minikube-integration/17586-749551/.minikube/addons for local assets ...
	I1108 23:36:05.857384  755400 filesync.go:126] Scanning /home/jenkins/minikube-integration/17586-749551/.minikube/files for local assets ...
	I1108 23:36:05.857414  755400 start.go:303] post-start completed in 116.432008ms
	I1108 23:36:05.857759  755400 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-118967
	I1108 23:36:05.876199  755400 profile.go:148] Saving config to /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/config.json ...
	I1108 23:36:05.876490  755400 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 23:36:05.876542  755400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118967
	I1108 23:36:05.894163  755400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33702 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/addons-118967/id_rsa Username:docker}
	I1108 23:36:05.987632  755400 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 23:36:05.993655  755400 start.go:128] duration metric: createHost completed in 10.487067911s
	I1108 23:36:05.993681  755400 start.go:83] releasing machines lock for "addons-118967", held for 10.487242031s
	I1108 23:36:05.993759  755400 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-118967
	I1108 23:36:06.014386  755400 ssh_runner.go:195] Run: cat /version.json
	I1108 23:36:06.014458  755400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118967
	I1108 23:36:06.014738  755400 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 23:36:06.014815  755400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118967
	I1108 23:36:06.038893  755400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33702 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/addons-118967/id_rsa Username:docker}
	I1108 23:36:06.041652  755400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33702 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/addons-118967/id_rsa Username:docker}
	I1108 23:36:06.130446  755400 ssh_runner.go:195] Run: systemctl --version
	I1108 23:36:06.333029  755400 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1108 23:36:06.339320  755400 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1108 23:36:06.371706  755400 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1108 23:36:06.371831  755400 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 23:36:06.406160  755400 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
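
The two find commands above first patch any loopback CNI config (injecting a "name" field and pinning cniVersion to 1.0.0), then rename bridge/podman configs to *.mk_disabled so only the CNI minikube installs later is active. A sketch of the loopback patch as a JSON rewrite (patchLoopback is a hypothetical helper; minikube edits the file in place with sed, as logged):

// cnipatch.go -- a sketch of the loopback CNI config patch.
package main

import (
	"encoding/json"
	"fmt"
)

func patchLoopback(conf []byte) ([]byte, error) {
	var m map[string]interface{}
	if err := json.Unmarshal(conf, &m); err != nil {
		return nil, err
	}
	if m["type"] == "loopback" {
		if _, ok := m["name"]; !ok {
			m["name"] = "loopback" // guarantee a name field
		}
		m["cniVersion"] = "1.0.0" // pin the CNI spec version
	}
	return json.MarshalIndent(m, "", "  ")
}

func main() {
	out, err := patchLoopback([]byte(`{"cniVersion": "0.3.1", "type": "loopback"}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
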
	I1108 23:36:06.406183  755400 start.go:472] detecting cgroup driver to use...
	I1108 23:36:06.406233  755400 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1108 23:36:06.406299  755400 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1108 23:36:06.421594  755400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1108 23:36:06.436158  755400 docker.go:203] disabling cri-docker service (if available) ...
	I1108 23:36:06.436223  755400 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 23:36:06.453062  755400 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 23:36:06.469427  755400 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 23:36:06.567636  755400 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 23:36:06.672507  755400 docker.go:219] disabling docker service ...
	I1108 23:36:06.672572  755400 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 23:36:06.695604  755400 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 23:36:06.711136  755400 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 23:36:06.812244  755400 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 23:36:06.923214  755400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 23:36:06.936528  755400 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 23:36:06.956913  755400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1108 23:36:06.969477  755400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1108 23:36:06.981955  755400 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1108 23:36:06.982033  755400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1108 23:36:06.994833  755400 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1108 23:36:07.008944  755400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1108 23:36:07.021941  755400 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1108 23:36:07.035253  755400 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 23:36:07.047636  755400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1108 23:36:07.060561  755400 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 23:36:07.070830  755400 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 23:36:07.081342  755400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 23:36:07.175084  755400 ssh_runner.go:195] Run: sudo systemctl restart containerd
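
The sed edits at 23:36:06.95-07.06 rewrite /etc/containerd/config.toml so containerd uses the runc v2 runtime and the "cgroupfs" cgroup driver detected on the host (SystemdCgroup = false), then containerd is restarted. A sketch of the central rewrite as a Go regexp (illustrative only; minikube shells out to sed as logged):

// containerdcfg.go -- a sketch mirroring the SystemdCgroup sed edit above.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	cfg := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`
	// Preserve indentation, force the value to false.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(cfg, "${1}SystemdCgroup = false"))
}
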
	I1108 23:36:07.329941  755400 start.go:519] Will wait 60s for socket path /run/containerd/containerd.sock
	I1108 23:36:07.330086  755400 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1108 23:36:07.335086  755400 start.go:540] Will wait 60s for crictl version
	I1108 23:36:07.335197  755400 ssh_runner.go:195] Run: which crictl
	I1108 23:36:07.339812  755400 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1108 23:36:07.383590  755400 start.go:556] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.24
	RuntimeApiVersion:  v1
	I1108 23:36:07.383741  755400 ssh_runner.go:195] Run: containerd --version
	I1108 23:36:07.413211  755400 ssh_runner.go:195] Run: containerd --version
	I1108 23:36:07.449330  755400 out.go:177] * Preparing Kubernetes v1.28.3 on containerd 1.6.24 ...
	I1108 23:36:07.451311  755400 cli_runner.go:164] Run: docker network inspect addons-118967 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 23:36:07.471994  755400 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1108 23:36:07.476607  755400 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 23:36:07.490169  755400 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1108 23:36:07.490244  755400 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 23:36:07.534924  755400 containerd.go:604] all images are preloaded for containerd runtime.
	I1108 23:36:07.534951  755400 containerd.go:518] Images already preloaded, skipping extraction
	I1108 23:36:07.535013  755400 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 23:36:07.576796  755400 containerd.go:604] all images are preloaded for containerd runtime.
	I1108 23:36:07.576823  755400 cache_images.go:84] Images are preloaded, skipping loading
	I1108 23:36:07.576882  755400 ssh_runner.go:195] Run: sudo crictl info
	I1108 23:36:07.618547  755400 cni.go:84] Creating CNI manager for ""
	I1108 23:36:07.618573  755400 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1108 23:36:07.618603  755400 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1108 23:36:07.618627  755400 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-118967 NodeName:addons-118967 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 23:36:07.618771  755400 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-118967"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 23:36:07.618837  755400 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-118967 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:addons-118967 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1108 23:36:07.618911  755400 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1108 23:36:07.629990  755400 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 23:36:07.630137  755400 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 23:36:07.641430  755400 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (385 bytes)
	I1108 23:36:07.663769  755400 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 23:36:07.685887  755400 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1108 23:36:07.707485  755400 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1108 23:36:07.712037  755400 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 23:36:07.725964  755400 certs.go:56] Setting up /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967 for IP: 192.168.49.2
	I1108 23:36:07.726000  755400 certs.go:190] acquiring lock for shared ca certs: {Name:mk3980826f8d7f07af38edd9b91f2a0fe0b143c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:36:07.726505  755400 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.key
	I1108 23:36:07.915820  755400 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17586-749551/.minikube/ca.crt ...
	I1108 23:36:07.915851  755400 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17586-749551/.minikube/ca.crt: {Name:mke474841e2e9691cf09754a2628e62996c692c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:36:07.916051  755400 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17586-749551/.minikube/ca.key ...
	I1108 23:36:07.916063  755400 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17586-749551/.minikube/ca.key: {Name:mke3a237a6215edf6af495bc6861b9a7cfbdae7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:36:07.916826  755400 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17586-749551/.minikube/proxy-client-ca.key
	I1108 23:36:08.409299  755400 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17586-749551/.minikube/proxy-client-ca.crt ...
	I1108 23:36:08.409337  755400 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17586-749551/.minikube/proxy-client-ca.crt: {Name:mk94a6dea53d860ce7c724ffdf34d16d1415f36b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:36:08.409577  755400 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17586-749551/.minikube/proxy-client-ca.key ...
	I1108 23:36:08.409592  755400 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17586-749551/.minikube/proxy-client-ca.key: {Name:mkc9a7a786f6a8053249ba6bf47a92fc77256adb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:36:08.409720  755400 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.key
	I1108 23:36:08.409738  755400 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt with IP's: []
	I1108 23:36:09.127567  755400 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt ...
	I1108 23:36:09.127598  755400 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: {Name:mkedc46735e64709f00a903088d64d78af0895ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:36:09.127801  755400 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.key ...
	I1108 23:36:09.127815  755400 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.key: {Name:mkbb1c868c719727478a29472245b3d2412ee83f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:36:09.127911  755400 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/apiserver.key.dd3b5fb2
	I1108 23:36:09.127932  755400 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1108 23:36:09.691342  755400 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/apiserver.crt.dd3b5fb2 ...
	I1108 23:36:09.691372  755400 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/apiserver.crt.dd3b5fb2: {Name:mke544635b82ab666295d1aeec7532873a54936c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:36:09.691961  755400 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/apiserver.key.dd3b5fb2 ...
	I1108 23:36:09.691981  755400 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/apiserver.key.dd3b5fb2: {Name:mk001a952c28aab6fe43028d4559f187c599418c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:36:09.692438  755400 certs.go:337] copying /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/apiserver.crt
	I1108 23:36:09.692536  755400 certs.go:341] copying /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/apiserver.key
	I1108 23:36:09.692593  755400 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/proxy-client.key
	I1108 23:36:09.692614  755400 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/proxy-client.crt with IP's: []
	I1108 23:36:09.827999  755400 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/proxy-client.crt ...
	I1108 23:36:09.828025  755400 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/proxy-client.crt: {Name:mk0a744169291bd31b0294e435034cd05d7c74f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:36:09.828870  755400 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/proxy-client.key ...
	I1108 23:36:09.828889  755400 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/proxy-client.key: {Name:mk387fc2a2379c97451f36fff20a1cdf17823d89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:36:09.830921  755400 certs.go:437] found cert: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 23:36:09.831002  755400 certs.go:437] found cert: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem (1078 bytes)
	I1108 23:36:09.831052  755400 certs.go:437] found cert: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem (1123 bytes)
	I1108 23:36:09.831102  755400 certs.go:437] found cert: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem (1679 bytes)
	I1108 23:36:09.831739  755400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1108 23:36:09.864277  755400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1108 23:36:09.892847  755400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 23:36:09.921858  755400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 23:36:09.951675  755400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-749551/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 23:36:09.979776  755400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-749551/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 23:36:10.020151  755400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-749551/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 23:36:10.074740  755400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-749551/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 23:36:10.118408  755400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-749551/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 23:36:10.150358  755400 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 23:36:10.173267  755400 ssh_runner.go:195] Run: openssl version
	I1108 23:36:10.180564  755400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 23:36:10.193366  755400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 23:36:10.198407  755400 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  8 23:36 /usr/share/ca-certificates/minikubeCA.pem
	I1108 23:36:10.198476  755400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 23:36:10.207612  755400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 23:36:10.220530  755400 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1108 23:36:10.225122  755400 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1108 23:36:10.225174  755400 kubeadm.go:404] StartCluster: {Name:addons-118967 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-118967 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 23:36:10.225249  755400 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1108 23:36:10.225319  755400 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 23:36:10.267565  755400 cri.go:89] found id: ""
	I1108 23:36:10.267634  755400 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 23:36:10.278399  755400 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 23:36:10.289243  755400 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1108 23:36:10.289353  755400 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 23:36:10.300460  755400 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 23:36:10.300504  755400 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 23:36:10.351732  755400 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1108 23:36:10.352052  755400 kubeadm.go:322] [preflight] Running pre-flight checks
	I1108 23:36:10.398594  755400 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1108 23:36:10.398733  755400 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1049-aws
	I1108 23:36:10.398793  755400 kubeadm.go:322] OS: Linux
	I1108 23:36:10.398860  755400 kubeadm.go:322] CGROUPS_CPU: enabled
	I1108 23:36:10.398932  755400 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1108 23:36:10.399004  755400 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1108 23:36:10.399089  755400 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1108 23:36:10.399172  755400 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1108 23:36:10.399251  755400 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1108 23:36:10.399333  755400 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1108 23:36:10.399404  755400 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1108 23:36:10.399480  755400 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1108 23:36:10.479063  755400 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 23:36:10.479250  755400 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 23:36:10.479398  755400 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1108 23:36:10.743357  755400 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 23:36:10.746345  755400 out.go:204]   - Generating certificates and keys ...
	I1108 23:36:10.746584  755400 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1108 23:36:10.746673  755400 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1108 23:36:11.163925  755400 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 23:36:11.551607  755400 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1108 23:36:11.769720  755400 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1108 23:36:12.697836  755400 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1108 23:36:13.502759  755400 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1108 23:36:13.503133  755400 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-118967 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1108 23:36:13.799179  755400 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1108 23:36:13.799664  755400 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-118967 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1108 23:36:14.008793  755400 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 23:36:14.275267  755400 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 23:36:14.439460  755400 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1108 23:36:14.439862  755400 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 23:36:15.820380  755400 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 23:36:16.042450  755400 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 23:36:16.248448  755400 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 23:36:16.500761  755400 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 23:36:16.501455  755400 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 23:36:16.504319  755400 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 23:36:16.506345  755400 out.go:204]   - Booting up control plane ...
	I1108 23:36:16.506489  755400 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 23:36:16.506565  755400 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 23:36:16.507176  755400 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 23:36:16.521559  755400 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 23:36:16.522112  755400 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 23:36:16.522434  755400 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1108 23:36:16.627945  755400 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1108 23:36:24.132595  755400 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.504091 seconds
	I1108 23:36:24.132712  755400 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 23:36:24.148595  755400 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 23:36:24.681271  755400 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 23:36:24.681476  755400 kubeadm.go:322] [mark-control-plane] Marking the node addons-118967 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 23:36:25.193237  755400 kubeadm.go:322] [bootstrap-token] Using token: vq0djo.fve21hdaphqsqm4m
	I1108 23:36:25.195056  755400 out.go:204]   - Configuring RBAC rules ...
	I1108 23:36:25.195185  755400 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 23:36:25.201015  755400 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 23:36:25.209533  755400 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 23:36:25.213819  755400 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 23:36:25.219008  755400 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 23:36:25.222933  755400 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 23:36:25.236269  755400 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 23:36:25.480604  755400 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1108 23:36:25.618608  755400 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1108 23:36:25.619755  755400 kubeadm.go:322] 
	I1108 23:36:25.619836  755400 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1108 23:36:25.619843  755400 kubeadm.go:322] 
	I1108 23:36:25.619923  755400 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1108 23:36:25.619931  755400 kubeadm.go:322] 
	I1108 23:36:25.619971  755400 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1108 23:36:25.620048  755400 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 23:36:25.620096  755400 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 23:36:25.620113  755400 kubeadm.go:322] 
	I1108 23:36:25.620164  755400 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1108 23:36:25.620169  755400 kubeadm.go:322] 
	I1108 23:36:25.620214  755400 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 23:36:25.620218  755400 kubeadm.go:322] 
	I1108 23:36:25.620267  755400 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1108 23:36:25.620337  755400 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 23:36:25.620401  755400 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 23:36:25.620405  755400 kubeadm.go:322] 
	I1108 23:36:25.620483  755400 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 23:36:25.620555  755400 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1108 23:36:25.620559  755400 kubeadm.go:322] 
	I1108 23:36:25.620637  755400 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token vq0djo.fve21hdaphqsqm4m \
	I1108 23:36:25.620736  755400 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ab3ece522733f055757c65e666fc1044b61a233d0aa5f64decfdb326c72a9a27 \
	I1108 23:36:25.620756  755400 kubeadm.go:322] 	--control-plane 
	I1108 23:36:25.620760  755400 kubeadm.go:322] 
	I1108 23:36:25.620840  755400 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1108 23:36:25.620844  755400 kubeadm.go:322] 
	I1108 23:36:25.620926  755400 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token vq0djo.fve21hdaphqsqm4m \
	I1108 23:36:25.621061  755400 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ab3ece522733f055757c65e666fc1044b61a233d0aa5f64decfdb326c72a9a27 
	I1108 23:36:25.623621  755400 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1049-aws\n", err: exit status 1
	I1108 23:36:25.623759  755400 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 23:36:25.623807  755400 cni.go:84] Creating CNI manager for ""
	I1108 23:36:25.623816  755400 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1108 23:36:25.625777  755400 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1108 23:36:25.627422  755400 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 23:36:25.637979  755400 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1108 23:36:25.638007  755400 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1108 23:36:25.675938  755400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 23:36:26.617993  755400 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 23:36:26.618088  755400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:36:26.618135  755400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e21c718ea4d79be9ab6c82476dffc8ce4079c94e minikube.k8s.io/name=addons-118967 minikube.k8s.io/updated_at=2023_11_08T23_36_26_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:36:26.842279  755400 ops.go:34] apiserver oom_adj: -16
	I1108 23:36:26.842375  755400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:36:26.939438  755400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:36:27.532421  755400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:36:28.031945  755400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:36:28.532067  755400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:36:29.032904  755400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:36:29.532017  755400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:36:30.038027  755400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:36:30.532867  755400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:36:31.032059  755400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:36:31.532610  755400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:36:32.032089  755400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:36:32.532016  755400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:36:33.032654  755400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:36:33.532698  755400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:36:34.032196  755400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:36:34.532506  755400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:36:35.032240  755400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:36:35.532142  755400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:36:36.032034  755400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:36:36.532726  755400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:36:37.032788  755400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:36:37.532247  755400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:36:38.032102  755400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:36:38.532705  755400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:36:38.647601  755400 kubeadm.go:1081] duration metric: took 12.029576843s to wait for elevateKubeSystemPrivileges.
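
The run of `kubectl get sa default` calls from 23:36:26 to 23:36:38 is a ~500ms poll: the cluster is only considered ready for privilege elevation once the default service account exists in the default namespace. A minimal sketch of that poll-until-success pattern (waitFor is an invented helper; minikube's actual loop lives in kubeadm.go):

// waitsa.go -- a sketch of the ~500ms readiness poll above.
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor retries probe at the given interval until it succeeds or times out.
func waitFor(probe func() error, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := probe(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for probe")
		}
		time.Sleep(interval)
	}
}

func main() {
	start := time.Now()
	err := waitFor(func() error {
		// Stand-in for `kubectl get sa default`: succeeds after 2s here.
		if time.Since(start) < 2*time.Second {
			return errors.New(`serviceaccount "default" not found`)
		}
		return nil
	}, 500*time.Millisecond, 60*time.Second)
	fmt.Println("done:", err)
}
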
	I1108 23:36:38.647630  755400 kubeadm.go:406] StartCluster complete in 28.422461703s
	I1108 23:36:38.647649  755400 settings.go:142] acquiring lock: {Name:mk7d57467a4d6a0a6ec02c87b75e10e0424576f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:36:38.647776  755400 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17586-749551/kubeconfig
	I1108 23:36:38.648151  755400 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17586-749551/kubeconfig: {Name:mk63034fab281bd30b4004637fdc41282aa952da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:36:38.648778  755400 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 23:36:38.649071  755400 config.go:182] Loaded profile config "addons-118967": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1108 23:36:38.649230  755400 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1108 23:36:38.649333  755400 addons.go:69] Setting volumesnapshots=true in profile "addons-118967"
	I1108 23:36:38.649355  755400 addons.go:231] Setting addon volumesnapshots=true in "addons-118967"
	I1108 23:36:38.649425  755400 host.go:66] Checking if "addons-118967" exists ...
	I1108 23:36:38.649899  755400 cli_runner.go:164] Run: docker container inspect addons-118967 --format={{.State.Status}}
	I1108 23:36:38.650392  755400 addons.go:69] Setting cloud-spanner=true in profile "addons-118967"
	I1108 23:36:38.650411  755400 addons.go:231] Setting addon cloud-spanner=true in "addons-118967"
	I1108 23:36:38.650445  755400 host.go:66] Checking if "addons-118967" exists ...
	I1108 23:36:38.650854  755400 cli_runner.go:164] Run: docker container inspect addons-118967 --format={{.State.Status}}
	I1108 23:36:38.651142  755400 addons.go:69] Setting metrics-server=true in profile "addons-118967"
	I1108 23:36:38.651169  755400 addons.go:231] Setting addon metrics-server=true in "addons-118967"
	I1108 23:36:38.651201  755400 host.go:66] Checking if "addons-118967" exists ...
	I1108 23:36:38.651604  755400 cli_runner.go:164] Run: docker container inspect addons-118967 --format={{.State.Status}}
	I1108 23:36:38.652018  755400 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-118967"
	I1108 23:36:38.652042  755400 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-118967"
	I1108 23:36:38.652094  755400 host.go:66] Checking if "addons-118967" exists ...
	I1108 23:36:38.652522  755400 cli_runner.go:164] Run: docker container inspect addons-118967 --format={{.State.Status}}
	I1108 23:36:38.655392  755400 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-118967"
	I1108 23:36:38.655665  755400 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-118967"
	I1108 23:36:38.656323  755400 host.go:66] Checking if "addons-118967" exists ...
	I1108 23:36:38.656863  755400 cli_runner.go:164] Run: docker container inspect addons-118967 --format={{.State.Status}}
	I1108 23:36:38.663356  755400 addons.go:69] Setting registry=true in profile "addons-118967"
	I1108 23:36:38.655821  755400 addons.go:69] Setting default-storageclass=true in profile "addons-118967"
	I1108 23:36:38.655832  755400 addons.go:69] Setting gcp-auth=true in profile "addons-118967"
	I1108 23:36:38.655845  755400 addons.go:69] Setting ingress=true in profile "addons-118967"
	I1108 23:36:38.655852  755400 addons.go:69] Setting ingress-dns=true in profile "addons-118967"
	I1108 23:36:38.655860  755400 addons.go:69] Setting inspektor-gadget=true in profile "addons-118967"
	I1108 23:36:38.669765  755400 addons.go:231] Setting addon inspektor-gadget=true in "addons-118967"
	I1108 23:36:38.669842  755400 host.go:66] Checking if "addons-118967" exists ...
	I1108 23:36:38.672323  755400 cli_runner.go:164] Run: docker container inspect addons-118967 --format={{.State.Status}}
	I1108 23:36:38.672348  755400 addons.go:69] Setting storage-provisioner=true in profile "addons-118967"
	I1108 23:36:38.678525  755400 addons.go:231] Setting addon storage-provisioner=true in "addons-118967"
	I1108 23:36:38.678580  755400 host.go:66] Checking if "addons-118967" exists ...
	I1108 23:36:38.679022  755400 cli_runner.go:164] Run: docker container inspect addons-118967 --format={{.State.Status}}
	I1108 23:36:38.672364  755400 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-118967"
	I1108 23:36:38.695214  755400 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-118967"
	I1108 23:36:38.695561  755400 cli_runner.go:164] Run: docker container inspect addons-118967 --format={{.State.Status}}
	I1108 23:36:38.672449  755400 addons.go:231] Setting addon registry=true in "addons-118967"
	I1108 23:36:38.720702  755400 host.go:66] Checking if "addons-118967" exists ...
	I1108 23:36:38.721180  755400 cli_runner.go:164] Run: docker container inspect addons-118967 --format={{.State.Status}}
	I1108 23:36:38.672462  755400 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-118967"
	I1108 23:36:38.731764  755400 cli_runner.go:164] Run: docker container inspect addons-118967 --format={{.State.Status}}
	I1108 23:36:38.672481  755400 mustload.go:65] Loading cluster: addons-118967
	I1108 23:36:38.746374  755400 config.go:182] Loaded profile config "addons-118967": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1108 23:36:38.746737  755400 cli_runner.go:164] Run: docker container inspect addons-118967 --format={{.State.Status}}
	I1108 23:36:38.672487  755400 addons.go:231] Setting addon ingress=true in "addons-118967"
	I1108 23:36:38.778974  755400 host.go:66] Checking if "addons-118967" exists ...
	I1108 23:36:38.779438  755400 cli_runner.go:164] Run: docker container inspect addons-118967 --format={{.State.Status}}
	I1108 23:36:38.672494  755400 addons.go:231] Setting addon ingress-dns=true in "addons-118967"
	I1108 23:36:38.797586  755400 host.go:66] Checking if "addons-118967" exists ...
	I1108 23:36:38.798047  755400 cli_runner.go:164] Run: docker container inspect addons-118967 --format={{.State.Status}}
	I1108 23:36:38.883262  755400 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1108 23:36:38.887136  755400 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1108 23:36:38.887202  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1108 23:36:38.887287  755400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118967
	I1108 23:36:38.907337  755400 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.2
	W1108 23:36:38.907922  755400 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "addons-118967" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E1108 23:36:38.910602  755400 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
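The "object has been modified" failure above is the API server's optimistic-concurrency conflict: the coredns Deployment changed between minikube's read and its scale-down write, so the stale update was rejected, and the test treats this as non-fatal. Done by hand, `kubectl scale` re-reads the live object, so a plain retry normally clears the conflict (sketch, assuming the same context as this run):

	# Sketch: rescale coredns manually; kubectl scale fetches the latest
	# ResourceVersion, so a conflict like the one above rarely repeats.
	kubectl --context addons-118967 -n kube-system scale deployment coredns --replicas=1
	kubectl --context addons-118967 -n kube-system get deployment coredns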
	I1108 23:36:38.910647  755400 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1108 23:36:38.912426  755400 out.go:177] * Verifying Kubernetes components...
	I1108 23:36:38.910803  755400 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.11
	I1108 23:36:38.910913  755400 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1108 23:36:38.910919  755400 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1108 23:36:38.910923  755400 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1108 23:36:38.916104  755400 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1108 23:36:38.914992  755400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 23:36:38.915014  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1108 23:36:38.919619  755400 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1108 23:36:38.919682  755400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118967
	I1108 23:36:38.920748  755400 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-118967"
	I1108 23:36:38.920809  755400 host.go:66] Checking if "addons-118967" exists ...
	I1108 23:36:38.921257  755400 cli_runner.go:164] Run: docker container inspect addons-118967 --format={{.State.Status}}
	I1108 23:36:38.921775  755400 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1108 23:36:38.921793  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1108 23:36:38.921889  755400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118967
	I1108 23:36:38.935500  755400 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 23:36:38.937647  755400 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 23:36:38.937676  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 23:36:38.937744  755400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118967
	I1108 23:36:38.991892  755400 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.22.0
	I1108 23:36:38.990083  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1108 23:36:38.990097  755400 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1108 23:36:38.998246  755400 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1108 23:36:38.998425  755400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118967
	I1108 23:36:39.002565  755400 out.go:177]   - Using image docker.io/registry:2.8.3
	I1108 23:36:39.002643  755400 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1108 23:36:39.002727  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1108 23:36:39.014436  755400 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1108 23:36:39.012097  755400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118967
	I1108 23:36:39.019900  755400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33702 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/addons-118967/id_rsa Username:docker}
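Each "new ssh client" line dials the node container's sshd through the host port recovered by the `docker container inspect -f …HostPort…` template a few lines up. A manual equivalent using the key path and username shown in this log (sketch):

	# Sketch: find the host port mapped to the node's 22/tcp, then dial it.
	PORT=$(docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-118967)
	ssh -p "$PORT" \
	  -i /home/jenkins/minikube-integration/17586-749551/.minikube/machines/addons-118967/id_rsa \
	  docker@127.0.0.1 uname -m   # expect aarch64 on this arm64 job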
	I1108 23:36:39.042068  755400 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1108 23:36:39.044072  755400 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1108 23:36:39.045767  755400 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1108 23:36:39.049322  755400 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1108 23:36:39.052731  755400 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1108 23:36:39.052759  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1108 23:36:39.052831  755400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118967
	I1108 23:36:39.065151  755400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33702 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/addons-118967/id_rsa Username:docker}
	I1108 23:36:39.070747  755400 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1108 23:36:39.072387  755400 out.go:177]   - Using image docker.io/busybox:stable
	I1108 23:36:39.074352  755400 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1108 23:36:39.074372  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1108 23:36:39.074453  755400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118967
	I1108 23:36:39.091609  755400 addons.go:231] Setting addon default-storageclass=true in "addons-118967"
	I1108 23:36:39.091651  755400 host.go:66] Checking if "addons-118967" exists ...
	I1108 23:36:39.092149  755400 cli_runner.go:164] Run: docker container inspect addons-118967 --format={{.State.Status}}
	I1108 23:36:39.072718  755400 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1108 23:36:39.105342  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1108 23:36:39.105424  755400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118967
	I1108 23:36:39.141523  755400 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1108 23:36:39.139501  755400 host.go:66] Checking if "addons-118967" exists ...
	I1108 23:36:39.139586  755400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33702 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/addons-118967/id_rsa Username:docker}
	I1108 23:36:39.144460  755400 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1108 23:36:39.144479  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1108 23:36:39.144544  755400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118967
	I1108 23:36:39.146328  755400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33702 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/addons-118967/id_rsa Username:docker}
	I1108 23:36:39.207831  755400 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 23:36:39.207911  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 23:36:39.208051  755400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118967
	I1108 23:36:39.211537  755400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33702 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/addons-118967/id_rsa Username:docker}
	I1108 23:36:39.217034  755400 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1108 23:36:39.219524  755400 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1108 23:36:39.221281  755400 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1108 23:36:39.223203  755400 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1108 23:36:39.223226  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1108 23:36:39.223292  755400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118967
	I1108 23:36:39.250885  755400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33702 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/addons-118967/id_rsa Username:docker}
	I1108 23:36:39.254695  755400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33702 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/addons-118967/id_rsa Username:docker}
	I1108 23:36:39.319392  755400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33702 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/addons-118967/id_rsa Username:docker}
	I1108 23:36:39.332702  755400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33702 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/addons-118967/id_rsa Username:docker}
	I1108 23:36:39.335062  755400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33702 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/addons-118967/id_rsa Username:docker}
	I1108 23:36:39.341945  755400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33702 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/addons-118967/id_rsa Username:docker}
	I1108 23:36:39.346662  755400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33702 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/addons-118967/id_rsa Username:docker}
	W1108 23:36:39.351496  755400 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1108 23:36:39.351520  755400 retry.go:31] will retry after 315.481103ms: ssh: handshake failed: EOF
	I1108 23:36:39.411430  755400 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
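The bash pipeline above edits the live CoreDNS Corefile in place: sed inserts a `hosts` stanza mapping host.minikube.internal to the gateway just ahead of the `forward` plugin (so the static record wins, with `fallthrough` for everything else) and a `log` directive ahead of `errors`, then feeds the result back through `kubectl replace`. Roughly what the ConfigMap looks like afterwards (sketch; the surrounding plugins come from the stock kubeadm Corefile):

	# Sketch: inspect the rewritten Corefile.
	kubectl --context addons-118967 -n kube-system get configmap coredns \
	  -o jsonpath='{.data.Corefile}'
	# .:53 {
	#     log
	#     errors
	#     ...
	#     hosts {
	#        192.168.49.1 host.minikube.internal
	#        fallthrough
	#     }
	#     forward . /etc/resolv.conf
	#     ...
	# }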
	I1108 23:36:39.412296  755400 node_ready.go:35] waiting up to 6m0s for node "addons-118967" to be "Ready" ...
	I1108 23:36:39.415667  755400 node_ready.go:49] node "addons-118967" has status "Ready":"True"
	I1108 23:36:39.415742  755400 node_ready.go:38] duration metric: took 3.409134ms waiting for node "addons-118967" to be "Ready" ...
	I1108 23:36:39.415769  755400 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 23:36:39.426204  755400 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-85r22" in "kube-system" namespace to be "Ready" ...
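The pod_ready.go waits that follow are, in effect, condition waits on the labels listed above; an equivalent gate expressed with kubectl (sketch, showing two of the tracked labels):

	# Sketch: the same readiness gates via kubectl wait.
	kubectl --context addons-118967 -n kube-system wait --for=condition=Ready \
	  pod -l k8s-app=kube-dns --timeout=360s
	kubectl --context addons-118967 -n kube-system wait --for=condition=Ready \
	  pod -l component=kube-apiserver --timeout=360s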
	I1108 23:36:39.757046  755400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1108 23:36:39.794516  755400 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1108 23:36:39.794546  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1108 23:36:39.811008  755400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1108 23:36:39.934673  755400 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1108 23:36:39.934701  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1108 23:36:39.944461  755400 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1108 23:36:39.944492  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1108 23:36:39.969216  755400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 23:36:39.990720  755400 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1108 23:36:39.990754  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1108 23:36:40.004480  755400 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1108 23:36:40.004516  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1108 23:36:40.096451  755400 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1108 23:36:40.096533  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1108 23:36:40.102562  755400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 23:36:40.168919  755400 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1108 23:36:40.169005  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1108 23:36:40.175561  755400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1108 23:36:40.204435  755400 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1108 23:36:40.204497  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1108 23:36:40.208849  755400 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1108 23:36:40.208916  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1108 23:36:40.214521  755400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1108 23:36:40.242962  755400 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1108 23:36:40.243026  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1108 23:36:40.378998  755400 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1108 23:36:40.379069  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1108 23:36:40.379464  755400 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1108 23:36:40.379508  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1108 23:36:40.483806  755400 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1108 23:36:40.483874  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1108 23:36:40.493646  755400 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1108 23:36:40.493713  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1108 23:36:40.513830  755400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1108 23:36:40.598902  755400 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1108 23:36:40.598978  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1108 23:36:40.627987  755400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1108 23:36:40.768282  755400 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1108 23:36:40.768378  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1108 23:36:40.924390  755400 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1108 23:36:40.924462  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1108 23:36:40.961491  755400 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 23:36:40.961584  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1108 23:36:41.044167  755400 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1108 23:36:41.044233  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1108 23:36:41.082477  755400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1108 23:36:41.371628  755400 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1108 23:36:41.371707  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1108 23:36:41.386233  755400 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1108 23:36:41.386298  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1108 23:36:41.412034  755400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
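Once the metrics-server manifests just applied are reconciled and the Deployment reports ready, the resource-metrics API starts answering; a quick smoke test (sketch, using the Deployment name implied by the metrics-server pod names later in this log):

	# Sketch: wait for the rollout, then query the metrics API.
	kubectl --context addons-118967 -n kube-system rollout status \
	  deployment metrics-server --timeout=120s
	kubectl --context addons-118967 top nodes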
	I1108 23:36:41.446484  755400 pod_ready.go:102] pod "coredns-5dd5756b68-85r22" in "kube-system" namespace has status "Ready":"False"
	I1108 23:36:41.631937  755400 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1108 23:36:41.632018  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1108 23:36:41.689715  755400 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1108 23:36:41.689788  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1108 23:36:41.894507  755400 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1108 23:36:41.894584  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1108 23:36:41.947040  755400 pod_ready.go:92] pod "coredns-5dd5756b68-85r22" in "kube-system" namespace has status "Ready":"True"
	I1108 23:36:41.947073  755400 pod_ready.go:81] duration metric: took 2.520795834s waiting for pod "coredns-5dd5756b68-85r22" in "kube-system" namespace to be "Ready" ...
	I1108 23:36:41.947087  755400 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fm9qn" in "kube-system" namespace to be "Ready" ...
	I1108 23:36:41.957110  755400 pod_ready.go:92] pod "coredns-5dd5756b68-fm9qn" in "kube-system" namespace has status "Ready":"True"
	I1108 23:36:41.957144  755400 pod_ready.go:81] duration metric: took 10.011689ms waiting for pod "coredns-5dd5756b68-fm9qn" in "kube-system" namespace to be "Ready" ...
	I1108 23:36:41.957157  755400 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-118967" in "kube-system" namespace to be "Ready" ...
	I1108 23:36:41.964021  755400 pod_ready.go:92] pod "etcd-addons-118967" in "kube-system" namespace has status "Ready":"True"
	I1108 23:36:41.964048  755400 pod_ready.go:81] duration metric: took 6.8834ms waiting for pod "etcd-addons-118967" in "kube-system" namespace to be "Ready" ...
	I1108 23:36:41.964063  755400 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-118967" in "kube-system" namespace to be "Ready" ...
	I1108 23:36:41.970807  755400 pod_ready.go:92] pod "kube-apiserver-addons-118967" in "kube-system" namespace has status "Ready":"True"
	I1108 23:36:41.970832  755400 pod_ready.go:81] duration metric: took 6.731843ms waiting for pod "kube-apiserver-addons-118967" in "kube-system" namespace to be "Ready" ...
	I1108 23:36:41.970844  755400 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-118967" in "kube-system" namespace to be "Ready" ...
	I1108 23:36:41.993579  755400 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1108 23:36:41.993650  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1108 23:36:42.217257  755400 pod_ready.go:92] pod "kube-controller-manager-addons-118967" in "kube-system" namespace has status "Ready":"True"
	I1108 23:36:42.217292  755400 pod_ready.go:81] duration metric: took 246.44026ms waiting for pod "kube-controller-manager-addons-118967" in "kube-system" namespace to be "Ready" ...
	I1108 23:36:42.217307  755400 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s9stl" in "kube-system" namespace to be "Ready" ...
	I1108 23:36:42.250871  755400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1108 23:36:42.317772  755400 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1108 23:36:42.317841  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1108 23:36:42.604851  755400 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1108 23:36:42.604873  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1108 23:36:42.616162  755400 pod_ready.go:92] pod "kube-proxy-s9stl" in "kube-system" namespace has status "Ready":"True"
	I1108 23:36:42.616183  755400 pod_ready.go:81] duration metric: took 398.86812ms waiting for pod "kube-proxy-s9stl" in "kube-system" namespace to be "Ready" ...
	I1108 23:36:42.616195  755400 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-118967" in "kube-system" namespace to be "Ready" ...
	I1108 23:36:42.727527  755400 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.316053086s)
	I1108 23:36:42.727597  755400 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1108 23:36:42.727649  755400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.970574942s)
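With the host.minikube.internal record injected above, any in-cluster pod should now resolve it to 192.168.49.1; a hedged check using the busybox image this run already references:

	# Sketch: confirm the injected record resolves from inside the cluster.
	kubectl --context addons-118967 run dns-check --rm -i --restart=Never \
	  --image=docker.io/busybox:stable -- nslookup host.minikube.internal
	# expected answer: 192.168.49.1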
	I1108 23:36:42.819305  755400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1108 23:36:43.018090  755400 pod_ready.go:92] pod "kube-scheduler-addons-118967" in "kube-system" namespace has status "Ready":"True"
	I1108 23:36:43.018165  755400 pod_ready.go:81] duration metric: took 401.960085ms waiting for pod "kube-scheduler-addons-118967" in "kube-system" namespace to be "Ready" ...
	I1108 23:36:43.018193  755400 pod_ready.go:38] duration metric: took 3.602396503s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 23:36:43.018223  755400 api_server.go:52] waiting for apiserver process to appear ...
	I1108 23:36:43.018301  755400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 23:36:43.047824  755400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.236776484s)
	I1108 23:36:43.047921  755400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.078671341s)
	I1108 23:36:43.805616  755400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.702958064s)
	I1108 23:36:44.788033  755400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.612388842s)
	I1108 23:36:45.954073  755400 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1108 23:36:45.954158  755400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118967
	I1108 23:36:45.985043  755400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33702 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/addons-118967/id_rsa Username:docker}
	I1108 23:36:46.464757  755400 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1108 23:36:46.618618  755400 addons.go:231] Setting addon gcp-auth=true in "addons-118967"
	I1108 23:36:46.618667  755400 host.go:66] Checking if "addons-118967" exists ...
	I1108 23:36:46.619122  755400 cli_runner.go:164] Run: docker container inspect addons-118967 --format={{.State.Status}}
	I1108 23:36:46.652208  755400 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1108 23:36:46.652265  755400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-118967
	I1108 23:36:46.681930  755400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33702 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/addons-118967/id_rsa Username:docker}
	I1108 23:36:47.164074  755400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.949475918s)
	I1108 23:36:47.164152  755400 addons.go:467] Verifying addon ingress=true in "addons-118967"
	I1108 23:36:47.166168  755400 out.go:177] * Verifying ingress addon...
	I1108 23:36:47.164347  755400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.650449498s)
	I1108 23:36:47.164393  755400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.536333509s)
	I1108 23:36:47.164544  755400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.081996099s)
	I1108 23:36:47.164623  755400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.752521387s)
	I1108 23:36:47.164697  755400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.91374588s)
	I1108 23:36:47.167838  755400 addons.go:467] Verifying addon registry=true in "addons-118967"
	I1108 23:36:47.170301  755400 out.go:177] * Verifying registry addon...
	I1108 23:36:47.168659  755400 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W1108 23:36:47.168691  755400 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1108 23:36:47.168713  755400 addons.go:467] Verifying addon metrics-server=true in "addons-118967"
	I1108 23:36:47.172542  755400 retry.go:31] will retry after 267.269967ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1108 23:36:47.173102  755400 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1108 23:36:47.183198  755400 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1108 23:36:47.183222  755400 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 23:36:47.187958  755400 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1108 23:36:47.188031  755400 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 23:36:47.196743  755400 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 23:36:47.201385  755400 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 23:36:47.440947  755400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
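The failure retried above is an ordering race, not a broken manifest: the VolumeSnapshotClass object is applied in the same batch as the CRD that defines it, and the API server's discovery has not yet registered the new kind, hence "no matches for kind … ensure CRDs are installed first". Splitting the apply into two phases sidesteps the race (sketch):

	# Sketch: apply the snapshot CRDs first, wait for registration, then the CR.
	kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for=condition=Established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml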
	I1108 23:36:47.702080  755400 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 23:36:47.707328  755400 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 23:36:48.213658  755400 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 23:36:48.224977  755400 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 23:36:48.720632  755400 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 23:36:48.725728  755400 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 23:36:48.836055  755400 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.817705032s)
	I1108 23:36:48.836171  755400 api_server.go:72] duration metric: took 9.925468896s to wait for apiserver process to appear ...
	I1108 23:36:48.836207  755400 api_server.go:88] waiting for apiserver healthz status ...
	I1108 23:36:48.836274  755400 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1108 23:36:48.836622  755400 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.184392106s)
	I1108 23:36:48.838352  755400 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1108 23:36:48.840105  755400 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1108 23:36:48.837675  755400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.018282061s)
	I1108 23:36:48.842229  755400 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-118967"
	I1108 23:36:48.844164  755400 out.go:177] * Verifying csi-hostpath-driver addon...
	I1108 23:36:48.842439  755400 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1108 23:36:48.846590  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1108 23:36:48.847453  755400 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1108 23:36:48.853309  755400 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
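The healthz gate above is a plain HTTPS GET against the apiserver; kubeadm's default RBAC lets even anonymous callers hit /healthz, so a manual probe from the host is just (sketch; -k skips verification of the cluster-CA-signed certificate):

	# Sketch: the same probe by hand.
	curl -k https://192.168.49.2:8443/healthz
	# ok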
	I1108 23:36:48.861833  755400 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1108 23:36:48.861854  755400 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 23:36:48.862082  755400 api_server.go:141] control plane version: v1.28.3
	I1108 23:36:48.862127  755400 api_server.go:131] duration metric: took 25.863787ms to wait for apiserver health ...
	I1108 23:36:48.862157  755400 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 23:36:48.869247  755400 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 23:36:48.886406  755400 system_pods.go:59] 19 kube-system pods found
	I1108 23:36:48.886486  755400 system_pods.go:61] "coredns-5dd5756b68-85r22" [34c303c6-3a70-4b24-8cb4-a7db2e16376f] Running
	I1108 23:36:48.886508  755400 system_pods.go:61] "coredns-5dd5756b68-fm9qn" [f7bf3f81-512d-4167-a993-eab07ffbf222] Running
	I1108 23:36:48.886536  755400 system_pods.go:61] "csi-hostpath-attacher-0" [2bb21d19-16b9-495a-8da4-5f802733be21] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 23:36:48.886573  755400 system_pods.go:61] "csi-hostpath-resizer-0" [76803008-61d1-4fe2-ab55-9213bbe94848] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 23:36:48.886601  755400 system_pods.go:61] "csi-hostpathplugin-kpwxl" [eeb485fa-3fea-4f7f-9c58-840d7ba31742] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1108 23:36:48.886627  755400 system_pods.go:61] "etcd-addons-118967" [93d8ff9a-8d8d-4e9e-9cfc-015dde6f93c2] Running
	I1108 23:36:48.886649  755400 system_pods.go:61] "kindnet-57lhb" [9c737f92-9f2c-478c-be0e-41a723d8cf6f] Running
	I1108 23:36:48.886679  755400 system_pods.go:61] "kube-apiserver-addons-118967" [cd69e562-db92-4fed-b1e7-dfc59fde9f7f] Running
	I1108 23:36:48.886705  755400 system_pods.go:61] "kube-controller-manager-addons-118967" [269285d5-9f6d-4f07-8523-e5f2cfa79039] Running
	I1108 23:36:48.886732  755400 system_pods.go:61] "kube-ingress-dns-minikube" [b6d054c9-6481-4bc7-95e4-3a3dbeba5047] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 23:36:48.886758  755400 system_pods.go:61] "kube-proxy-s9stl" [a3d38e17-d9cb-4ba5-9ef1-d28228055660] Running
	I1108 23:36:48.886792  755400 system_pods.go:61] "kube-scheduler-addons-118967" [dbda07dd-d5a8-49e5-bf43-397563a11bc4] Running
	I1108 23:36:48.886823  755400 system_pods.go:61] "metrics-server-7c66d45ddc-c9kmx" [b5d47038-e0ba-4b1c-bdff-2a99a0a81148] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 23:36:48.886850  755400 system_pods.go:61] "nvidia-device-plugin-daemonset-46zpk" [87c2969a-8026-42fe-86c6-e669b75ebd9f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1108 23:36:48.886879  755400 system_pods.go:61] "registry-fnqg8" [b764ff1f-e51d-4072-9a1e-bf302ffc887e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 23:36:48.886919  755400 system_pods.go:61] "registry-proxy-znflj" [7a21a579-a336-4282-b18c-285134164e96] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1108 23:36:48.886946  755400 system_pods.go:61] "snapshot-controller-58dbcc7b99-4h6xr" [29c1f77b-d3c7-4be4-8937-e7ee39dd06bc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 23:36:48.886974  755400 system_pods.go:61] "snapshot-controller-58dbcc7b99-7tp2m" [0c52a492-de39-4fea-a2d8-65d4f1a1e686] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 23:36:48.887007  755400 system_pods.go:61] "storage-provisioner" [d0fcbdb6-2c14-4d11-aeab-c316ea6748ad] Running
	I1108 23:36:48.887029  755400 system_pods.go:74] duration metric: took 24.788617ms to wait for pod list to return data ...
	I1108 23:36:48.887052  755400 default_sa.go:34] waiting for default service account to be created ...
	I1108 23:36:48.889772  755400 default_sa.go:45] found service account: "default"
	I1108 23:36:48.889832  755400 default_sa.go:55] duration metric: took 2.759717ms for default service account to be created ...
	I1108 23:36:48.889857  755400 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 23:36:48.900096  755400 system_pods.go:86] 19 kube-system pods found
	I1108 23:36:48.900184  755400 system_pods.go:89] "coredns-5dd5756b68-85r22" [34c303c6-3a70-4b24-8cb4-a7db2e16376f] Running
	I1108 23:36:48.900206  755400 system_pods.go:89] "coredns-5dd5756b68-fm9qn" [f7bf3f81-512d-4167-a993-eab07ffbf222] Running
	I1108 23:36:48.900248  755400 system_pods.go:89] "csi-hostpath-attacher-0" [2bb21d19-16b9-495a-8da4-5f802733be21] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 23:36:48.900275  755400 system_pods.go:89] "csi-hostpath-resizer-0" [76803008-61d1-4fe2-ab55-9213bbe94848] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 23:36:48.900305  755400 system_pods.go:89] "csi-hostpathplugin-kpwxl" [eeb485fa-3fea-4f7f-9c58-840d7ba31742] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1108 23:36:48.900329  755400 system_pods.go:89] "etcd-addons-118967" [93d8ff9a-8d8d-4e9e-9cfc-015dde6f93c2] Running
	I1108 23:36:48.900412  755400 system_pods.go:89] "kindnet-57lhb" [9c737f92-9f2c-478c-be0e-41a723d8cf6f] Running
	I1108 23:36:48.900439  755400 system_pods.go:89] "kube-apiserver-addons-118967" [cd69e562-db92-4fed-b1e7-dfc59fde9f7f] Running
	I1108 23:36:48.900462  755400 system_pods.go:89] "kube-controller-manager-addons-118967" [269285d5-9f6d-4f07-8523-e5f2cfa79039] Running
	I1108 23:36:48.900488  755400 system_pods.go:89] "kube-ingress-dns-minikube" [b6d054c9-6481-4bc7-95e4-3a3dbeba5047] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 23:36:48.900521  755400 system_pods.go:89] "kube-proxy-s9stl" [a3d38e17-d9cb-4ba5-9ef1-d28228055660] Running
	I1108 23:36:48.900547  755400 system_pods.go:89] "kube-scheduler-addons-118967" [dbda07dd-d5a8-49e5-bf43-397563a11bc4] Running
	I1108 23:36:48.900573  755400 system_pods.go:89] "metrics-server-7c66d45ddc-c9kmx" [b5d47038-e0ba-4b1c-bdff-2a99a0a81148] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 23:36:48.900596  755400 system_pods.go:89] "nvidia-device-plugin-daemonset-46zpk" [87c2969a-8026-42fe-86c6-e669b75ebd9f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1108 23:36:48.900628  755400 system_pods.go:89] "registry-fnqg8" [b764ff1f-e51d-4072-9a1e-bf302ffc887e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 23:36:48.900658  755400 system_pods.go:89] "registry-proxy-znflj" [7a21a579-a336-4282-b18c-285134164e96] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1108 23:36:48.900687  755400 system_pods.go:89] "snapshot-controller-58dbcc7b99-4h6xr" [29c1f77b-d3c7-4be4-8937-e7ee39dd06bc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 23:36:48.900712  755400 system_pods.go:89] "snapshot-controller-58dbcc7b99-7tp2m" [0c52a492-de39-4fea-a2d8-65d4f1a1e686] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 23:36:48.900742  755400 system_pods.go:89] "storage-provisioner" [d0fcbdb6-2c14-4d11-aeab-c316ea6748ad] Running
	I1108 23:36:48.900768  755400 system_pods.go:126] duration metric: took 10.891997ms to wait for k8s-apps to be running ...
	I1108 23:36:48.900791  755400 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 23:36:48.900876  755400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 23:36:48.930289  755400 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1108 23:36:48.930356  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1108 23:36:49.002246  755400 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1108 23:36:49.002313  755400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1108 23:36:49.087408  755400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1108 23:36:49.202235  755400 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 23:36:49.206800  755400 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 23:36:49.387402  755400 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 23:36:49.471952  755400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.030917851s)
	I1108 23:36:49.472048  755400 system_svc.go:56] duration metric: took 571.25507ms WaitForService to wait for kubelet.
	I1108 23:36:49.472074  755400 kubeadm.go:581] duration metric: took 10.56137391s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1108 23:36:49.472125  755400 node_conditions.go:102] verifying NodePressure condition ...
	I1108 23:36:49.477731  755400 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 23:36:49.477802  755400 node_conditions.go:123] node cpu capacity is 2
	I1108 23:36:49.477829  755400 node_conditions.go:105] duration metric: took 5.684315ms to run NodePressure ...
	I1108 23:36:49.477854  755400 start.go:228] waiting for startup goroutines ...
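The NodePressure check above reads capacity straight off the Node object; the same figures by hand (sketch):

	# Sketch: read the capacity fields this check reports.
	kubectl --context addons-118967 get node addons-118967 \
	  -o jsonpath="{.status.capacity.cpu} {.status.capacity['ephemeral-storage']}"; echo
	# 2 203034800Ki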
	I1108 23:36:49.701963  755400 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 23:36:49.707212  755400 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 23:36:49.875277  755400 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 23:36:50.133509  755400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.046020531s)
	I1108 23:36:50.135883  755400 addons.go:467] Verifying addon gcp-auth=true in "addons-118967"
	I1108 23:36:50.139667  755400 out.go:177] * Verifying gcp-auth addon...
	I1108 23:36:50.142459  755400 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1108 23:36:50.151427  755400 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1108 23:36:50.151457  755400 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... kapi.go:96 keeps polling "kubernetes.io/minikube-addons=gcp-auth", "app.kubernetes.io/name=ingress-nginx", "kubernetes.io/minikube-addons=registry" and "kubernetes.io/minikube-addons=csi-hostpath-driver", all still Pending: [<nil>], roughly every 500ms per selector from 23:36:50 through 23:37:18; identical poll lines trimmed ...]
	I1108 23:37:18.207767  755400 kapi.go:107] duration metric: took 31.034663483s to wait for kubernetes.io/minikube-addons=registry ...
	[... polling continues for "kubernetes.io/minikube-addons=gcp-auth", "app.kubernetes.io/name=ingress-nginx" and "kubernetes.io/minikube-addons=csi-hostpath-driver" (registry is done), all still Pending: [<nil>], from 23:37:18 through 23:37:36; identical poll lines trimmed ...]
	I1108 23:37:36.375196  755400 kapi.go:107] duration metric: took 47.527744031s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	[... polling continues for "kubernetes.io/minikube-addons=gcp-auth" and "app.kubernetes.io/name=ingress-nginx" only, both still Pending: [<nil>], from 23:37:36 through 23:37:58; identical poll lines trimmed ...]
	I1108 23:37:58.703745  755400 kapi.go:107] duration metric: took 1m11.535084236s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1108 23:37:59.164682  755400 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 23:37:59.668041  755400 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 23:38:00.175091  755400 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 23:38:00.665626  755400 kapi.go:107] duration metric: took 1m10.523162738s to wait for kubernetes.io/minikube-addons=gcp-auth ...
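
The kapi.go:96 lines above are minikube's addon waiter: it lists pods matching each addon's label selector and re-checks until they leave Pending, after which kapi.go:107 records the total wait. A rough CLI equivalent of one of those waits, assuming this run's profile, namespace and selector (the timeout here is an arbitrary choice), would be:

kubectl --context addons-118967 wait pod \
  --namespace gcp-auth \
  --selector kubernetes.io/minikube-addons=gcp-auth \
  --for=condition=Ready --timeout=5m    # timeout chosen arbitrarily for this sketch

This is only a sketch of the same check; the harness itself does the polling in Go (kapi.go) rather than via kubectl.
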
	I1108 23:38:00.667392  755400 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-118967 cluster.
	I1108 23:38:00.668978  755400 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1108 23:38:00.670702  755400 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1108 23:38:00.672417  755400 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, default-storageclass, storage-provisioner, storage-provisioner-rancher, inspektor-gadget, ingress-dns, metrics-server, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1108 23:38:00.673878  755400 addons.go:502] enable addons completed in 1m22.024644398s: enabled=[nvidia-device-plugin cloud-spanner default-storageclass storage-provisioner storage-provisioner-rancher inspektor-gadget ingress-dns metrics-server volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1108 23:38:00.673929  755400 start.go:233] waiting for cluster config update ...
	I1108 23:38:00.673949  755400 start.go:242] writing updated cluster config ...
	I1108 23:38:00.674268  755400 ssh_runner.go:195] Run: rm -f paused
	I1108 23:38:00.799662  755400 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1108 23:38:00.804100  755400 out.go:177] * Done! kubectl is now configured to use "addons-118967" cluster and "default" namespace by default
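
The gcp-auth-skip-secret opt-out mentioned in the output above is set in the pod's own metadata. A minimal sketch, assuming the conventional label value "true" (the log only names the key), with a hypothetical pod name and placeholder image:

kubectl --context addons-118967 apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds                # hypothetical pod name
  labels:
    gcp-auth-skip-secret: "true"    # key from the note above; value assumed
spec:
  containers:
  - name: app
    image: nginx                    # placeholder image
EOF

For pods that existed before the addon was enabled, the output above gives the two supported routes: recreate them, or rerun the enable step with --refresh (e.g. minikube -p addons-118967 addons enable gcp-auth --refresh).
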
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b5c258d081e0f       dd1b12fcb6097       9 seconds ago        Exited              hello-world-app           2                   4eae5c0260fa0       hello-world-app-5d77478584-m5k67
	9e3316b9f3e91       aae348c9fbd40       33 seconds ago       Running             nginx                     0                   8f3d32388dfd2       nginx
	0ba8ae2bdce30       14b04e7ab95a8       57 seconds ago       Running             headlamp                  0                   4706af824e40e       headlamp-777fd4b855-vnjrn
	957f30f0af6b6       2a5f29343eb03       About a minute ago   Running             gcp-auth                  0                   388c11cf7fd13       gcp-auth-d4c87556c-vxpw7
	951cece4c522d       af594c6a879f2       2 minutes ago        Exited              patch                     0                   7b092ea706b31       ingress-nginx-admission-patch-54bbb
	81a39b990d925       af594c6a879f2       2 minutes ago        Exited              create                    0                   5fdabec60bd5f       ingress-nginx-admission-create-djs7h
	768ff84fab9f3       ba04bb24b9575       3 minutes ago        Running             storage-provisioner       0                   6109aa3b59372       storage-provisioner
	ef07a9e94c34d       97e04611ad434       3 minutes ago        Running             coredns                   0                   56e5d18722650       coredns-5dd5756b68-fm9qn
	55a82049a394d       97e04611ad434       3 minutes ago        Running             coredns                   0                   e57d5cbb7a505       coredns-5dd5756b68-85r22
	d2c6b7564e349       04b4eaa3d3db8       3 minutes ago        Running             kindnet-cni               0                   813558e5b930b       kindnet-57lhb
	1d81123970592       a5dd5cdd6d3ef       3 minutes ago        Running             kube-proxy                0                   7173c37eb4b2b       kube-proxy-s9stl
	bffb16a5a8abe       9cdd6470f48c8       3 minutes ago        Running             etcd                      0                   0e71134ac0e0e       etcd-addons-118967
	e8a25681fea82       42a4e73724daa       3 minutes ago        Running             kube-scheduler            0                   29c301b8ff9bb       kube-scheduler-addons-118967
	f3393685f2d9d       8276439b4f237       3 minutes ago        Running             kube-controller-manager   0                   ee0952db35b74       kube-controller-manager-addons-118967
	ee136bcbc0a91       537e9a59ee2fd       3 minutes ago        Running             kube-apiserver            0                   894c55f1f179d       kube-apiserver-addons-118967
	
	* 
	* ==> containerd <==
	* Nov 08 23:39:42 addons-118967 containerd[745]: time="2023-11-08T23:39:42.869559341Z" level=info msg="TearDown network for sandbox \"6ca11ad1c9066ec3fef6e57e3fe773823e5d22b5ca74381e71657da6286077ba\" successfully"
	Nov 08 23:39:42 addons-118967 containerd[745]: time="2023-11-08T23:39:42.869682918Z" level=info msg="StopPodSandbox for \"6ca11ad1c9066ec3fef6e57e3fe773823e5d22b5ca74381e71657da6286077ba\" returns successfully"
	Nov 08 23:39:42 addons-118967 containerd[745]: time="2023-11-08T23:39:42.903006988Z" level=info msg="RemoveContainer for \"c1e1930d989d10ca8a46c3dfccc16919ce0552c8ad3fb2648e058c47bd1319f0\""
	Nov 08 23:39:42 addons-118967 containerd[745]: time="2023-11-08T23:39:42.912435658Z" level=info msg="RemoveContainer for \"c1e1930d989d10ca8a46c3dfccc16919ce0552c8ad3fb2648e058c47bd1319f0\" returns successfully"
	Nov 08 23:39:42 addons-118967 containerd[745]: time="2023-11-08T23:39:42.916636469Z" level=info msg="RemoveContainer for \"e75190a8c49970e06e93a818a1de443a8aef30836f8213f2b16c82f589e7a947\""
	Nov 08 23:39:42 addons-118967 containerd[745]: time="2023-11-08T23:39:42.924275855Z" level=info msg="RemoveContainer for \"e75190a8c49970e06e93a818a1de443a8aef30836f8213f2b16c82f589e7a947\" returns successfully"
	Nov 08 23:39:44 addons-118967 containerd[745]: time="2023-11-08T23:39:44.943571085Z" level=info msg="StopContainer for \"b469e34049cb1a3f1ee9ebe75a9e43d496f64444ea575a14563e6addc3328dac\" with timeout 2 (s)"
	Nov 08 23:39:44 addons-118967 containerd[745]: time="2023-11-08T23:39:44.943967292Z" level=info msg="Stop container \"b469e34049cb1a3f1ee9ebe75a9e43d496f64444ea575a14563e6addc3328dac\" with signal terminated"
	Nov 08 23:39:46 addons-118967 containerd[745]: time="2023-11-08T23:39:46.951964022Z" level=info msg="Kill container \"b469e34049cb1a3f1ee9ebe75a9e43d496f64444ea575a14563e6addc3328dac\""
	Nov 08 23:39:47 addons-118967 containerd[745]: time="2023-11-08T23:39:47.049031850Z" level=info msg="shim disconnected" id=b469e34049cb1a3f1ee9ebe75a9e43d496f64444ea575a14563e6addc3328dac
	Nov 08 23:39:47 addons-118967 containerd[745]: time="2023-11-08T23:39:47.049094521Z" level=warning msg="cleaning up after shim disconnected" id=b469e34049cb1a3f1ee9ebe75a9e43d496f64444ea575a14563e6addc3328dac namespace=k8s.io
	Nov 08 23:39:47 addons-118967 containerd[745]: time="2023-11-08T23:39:47.049106681Z" level=info msg="cleaning up dead shim"
	Nov 08 23:39:47 addons-118967 containerd[745]: time="2023-11-08T23:39:47.059402991Z" level=warning msg="cleanup warnings time=\"2023-11-08T23:39:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=11102 runtime=io.containerd.runc.v2\n"
	Nov 08 23:39:47 addons-118967 containerd[745]: time="2023-11-08T23:39:47.062667346Z" level=info msg="StopContainer for \"b469e34049cb1a3f1ee9ebe75a9e43d496f64444ea575a14563e6addc3328dac\" returns successfully"
	Nov 08 23:39:47 addons-118967 containerd[745]: time="2023-11-08T23:39:47.063271266Z" level=info msg="StopPodSandbox for \"a89bc62b5a86303ab6247b8f917e0c1708bf46efc232147cd957013948ce1d98\""
	Nov 08 23:39:47 addons-118967 containerd[745]: time="2023-11-08T23:39:47.063347032Z" level=info msg="Container to stop \"b469e34049cb1a3f1ee9ebe75a9e43d496f64444ea575a14563e6addc3328dac\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Nov 08 23:39:47 addons-118967 containerd[745]: time="2023-11-08T23:39:47.098106296Z" level=info msg="shim disconnected" id=a89bc62b5a86303ab6247b8f917e0c1708bf46efc232147cd957013948ce1d98
	Nov 08 23:39:47 addons-118967 containerd[745]: time="2023-11-08T23:39:47.098178320Z" level=warning msg="cleaning up after shim disconnected" id=a89bc62b5a86303ab6247b8f917e0c1708bf46efc232147cd957013948ce1d98 namespace=k8s.io
	Nov 08 23:39:47 addons-118967 containerd[745]: time="2023-11-08T23:39:47.098190373Z" level=info msg="cleaning up dead shim"
	Nov 08 23:39:47 addons-118967 containerd[745]: time="2023-11-08T23:39:47.110934616Z" level=warning msg="cleanup warnings time=\"2023-11-08T23:39:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=11133 runtime=io.containerd.runc.v2\n"
	Nov 08 23:39:47 addons-118967 containerd[745]: time="2023-11-08T23:39:47.174385895Z" level=info msg="TearDown network for sandbox \"a89bc62b5a86303ab6247b8f917e0c1708bf46efc232147cd957013948ce1d98\" successfully"
	Nov 08 23:39:47 addons-118967 containerd[745]: time="2023-11-08T23:39:47.174576409Z" level=info msg="StopPodSandbox for \"a89bc62b5a86303ab6247b8f917e0c1708bf46efc232147cd957013948ce1d98\" returns successfully"
	Nov 08 23:39:47 addons-118967 containerd[745]: time="2023-11-08T23:39:47.922420991Z" level=info msg="RemoveContainer for \"b469e34049cb1a3f1ee9ebe75a9e43d496f64444ea575a14563e6addc3328dac\""
	Nov 08 23:39:47 addons-118967 containerd[745]: time="2023-11-08T23:39:47.927276306Z" level=info msg="RemoveContainer for \"b469e34049cb1a3f1ee9ebe75a9e43d496f64444ea575a14563e6addc3328dac\" returns successfully"
	Nov 08 23:39:47 addons-118967 containerd[745]: time="2023-11-08T23:39:47.927954990Z" level=error msg="ContainerStatus for \"b469e34049cb1a3f1ee9ebe75a9e43d496f64444ea575a14563e6addc3328dac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b469e34049cb1a3f1ee9ebe75a9e43d496f64444ea575a14563e6addc3328dac\": not found"
	
	* 
	* ==> coredns [55a82049a394d1e287362aa0cf9ec52e0ba10f2ec402d02c68fcf6e79175f296] <==
	* [INFO] 10.244.0.19:40499 - 24787 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000131446s
	[INFO] 10.244.0.19:40499 - 25242 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000104943s
	[INFO] 10.244.0.19:40499 - 30311 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000103557s
	[INFO] 10.244.0.19:40499 - 63948 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000098608s
	[INFO] 10.244.0.19:40499 - 54698 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002038654s
	[INFO] 10.244.0.19:40499 - 65009 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001086682s
	[INFO] 10.244.0.19:40499 - 65374 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00015766s
	[INFO] 10.244.0.19:47463 - 38812 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000150957s
	[INFO] 10.244.0.19:47463 - 29967 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000066051s
	[INFO] 10.244.0.19:49678 - 44309 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000133743s
	[INFO] 10.244.0.19:47463 - 4631 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000049879s
	[INFO] 10.244.0.19:49678 - 1089 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000192327s
	[INFO] 10.244.0.19:47463 - 54649 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00020356s
	[INFO] 10.244.0.19:49678 - 37104 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000347355s
	[INFO] 10.244.0.19:47463 - 6927 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000065559s
	[INFO] 10.244.0.19:49678 - 35298 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000066453s
	[INFO] 10.244.0.19:49678 - 17003 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00005408s
	[INFO] 10.244.0.19:49678 - 56518 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000081337s
	[INFO] 10.244.0.19:47463 - 29452 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000058485s
	[INFO] 10.244.0.19:49678 - 29077 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001391723s
	[INFO] 10.244.0.19:47463 - 29037 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001575657s
	[INFO] 10.244.0.19:49678 - 61113 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000900566s
	[INFO] 10.244.0.19:49678 - 29361 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000079844s
	[INFO] 10.244.0.19:47463 - 33407 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000869255s
	[INFO] 10.244.0.19:47463 - 10813 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000058806s
	
	* 
	* ==> coredns [ef07a9e94c34d0d3ec0b175cbbd012c83c577f4659f0409329894180b4537c11] <==
	* linux/arm64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	[INFO] Reloading complete
	[INFO] 10.244.0.6:43172 - 54141 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002055597s
	[INFO] 10.244.0.6:43172 - 34683 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002048229s
	[INFO] 10.244.0.6:45361 - 32202 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000182744s
	[INFO] 10.244.0.6:45361 - 44232 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000589618s
	[INFO] 10.244.0.6:59723 - 18111 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001276195s
	[INFO] 10.244.0.6:59723 - 49585 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001639287s
	[INFO] 10.244.0.20:43464 - 55811 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001480339s
	[INFO] 10.244.0.20:42180 - 48142 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000164848s
	[INFO] 10.244.0.20:41578 - 16062 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000141776s
	[INFO] 10.244.0.20:42796 - 32909 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002851368s
	[INFO] 10.244.0.22:37161 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000177608s
	[INFO] 10.244.0.22:45181 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000188726s
	[INFO] 10.244.0.19:41156 - 7266 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000179806s
	[INFO] 10.244.0.19:41156 - 20683 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000149645s
	[INFO] 10.244.0.19:41156 - 3284 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000131635s
	[INFO] 10.244.0.19:41156 - 58181 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000090273s
	[INFO] 10.244.0.19:41156 - 44875 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000103376s
	[INFO] 10.244.0.19:41156 - 42784 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000129345s
	[INFO] 10.244.0.19:41156 - 6028 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.006398078s
	[INFO] 10.244.0.19:41156 - 52050 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002025673s
	[INFO] 10.244.0.19:41156 - 37618 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000196931s
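[Editor's note on the coredns sections above: the NXDOMAIN bursts are ordinary resolver search-path expansion, not lookup failures. The querying pod's search list is recoverable from the appended suffixes (ingress-nginx.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal); each candidate is tried before the name is queried as-is, which finally answers NOERROR. A small sketch reproducing the candidate order follows; the ndots:5 threshold is kubelet's standard pod resolv.conf setting, assumed rather than shown in this log.]

package main

import "fmt"

func main() {
	// Search domains inferred from the suffixes CoreDNS logged above.
	searches := []string{
		"ingress-nginx.svc.cluster.local",
		"svc.cluster.local",
		"cluster.local",
		"us-east-2.compute.internal",
	}
	name := "hello-world-app.default.svc.cluster.local"

	// With an ndots:5 threshold, this name (4 dots) is expanded through
	// every search suffix before being tried as-is; each candidate is
	// queried for both A and AAAA, matching the NXDOMAIN sequence above.
	for _, s := range searches {
		fmt.Println(name + "." + s) // answered NXDOMAIN above
	}
	fmt.Println(name) // bare name, answered NOERROR above
}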
	
	* 
	* ==> describe nodes <==
	* Name:               addons-118967
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-118967
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e21c718ea4d79be9ab6c82476dffc8ce4079c94e
	                    minikube.k8s.io/name=addons-118967
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_08T23_36_26_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-118967
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Nov 2023 23:36:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-118967
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Nov 2023 23:39:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Nov 2023 23:39:29 +0000   Wed, 08 Nov 2023 23:36:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Nov 2023 23:39:29 +0000   Wed, 08 Nov 2023 23:36:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Nov 2023 23:39:29 +0000   Wed, 08 Nov 2023 23:36:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Nov 2023 23:39:29 +0000   Wed, 08 Nov 2023 23:36:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-118967
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 251cda9c0e314613903ab9d288839716
	  System UUID:                cafd3284-4b89-4bcf-a36f-5707a85fe41b
	  Boot ID:                    34e87349-8f26-419b-8ec9-ff846a1986b6
	  Kernel Version:             5.15.0-1049-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.24
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-m5k67         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  gcp-auth                    gcp-auth-d4c87556c-vxpw7                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  headlamp                    headlamp-777fd4b855-vnjrn                0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 coredns-5dd5756b68-85r22                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m14s
	  kube-system                 coredns-5dd5756b68-fm9qn                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m14s
	  kube-system                 etcd-addons-118967                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         3m27s
	  kube-system                 kindnet-57lhb                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m15s
	  kube-system                 kube-apiserver-addons-118967             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 kube-controller-manager-addons-118967    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 kube-proxy-s9stl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m15s
	  kube-system                 kube-scheduler-addons-118967             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m12s  kube-proxy       
	  Normal  Starting                 3m27s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m27s  kubelet          Node addons-118967 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m27s  kubelet          Node addons-118967 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m27s  kubelet          Node addons-118967 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3m27s  kubelet          Node addons-118967 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3m27s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m17s  kubelet          Node addons-118967 status is now: NodeReady
	  Normal  RegisteredNode           3m15s  node-controller  Node addons-118967 event: Registered Node addons-118967 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.001076] FS-Cache: O-key=[8] '3e3c5c0100000000'
	[  +0.000762] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001025] FS-Cache: N-cookie d=0000000032cbb99c{9p.inode} n=00000000141d2f98
	[  +0.001092] FS-Cache: N-key=[8] '3e3c5c0100000000'
	[  +0.005644] FS-Cache: Duplicate cookie detected
	[  +0.000771] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.001003] FS-Cache: O-cookie d=0000000032cbb99c{9p.inode} n=00000000e15b38e7
	[  +0.001132] FS-Cache: O-key=[8] '3e3c5c0100000000'
	[  +0.001027] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000978] FS-Cache: N-cookie d=0000000032cbb99c{9p.inode} n=0000000028725ca5
	[  +0.001137] FS-Cache: N-key=[8] '3e3c5c0100000000'
	[  +2.604199] FS-Cache: Duplicate cookie detected
	[  +0.000703] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
	[  +0.001120] FS-Cache: O-cookie d=0000000032cbb99c{9p.inode} n=000000001a1e0a53
	[  +0.001138] FS-Cache: O-key=[8] '3d3c5c0100000000'
	[  +0.000761] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001030] FS-Cache: N-cookie d=0000000032cbb99c{9p.inode} n=00000000b5bb4b51
	[  +0.001267] FS-Cache: N-key=[8] '3d3c5c0100000000'
	[  +0.495540] FS-Cache: Duplicate cookie detected
	[  +0.000871] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.001422] FS-Cache: O-cookie d=0000000032cbb99c{9p.inode} n=00000000558770d2
	[  +0.001162] FS-Cache: O-key=[8] '433c5c0100000000'
	[  +0.000742] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000995] FS-Cache: N-cookie d=0000000032cbb99c{9p.inode} n=00000000141d2f98
	[  +0.001156] FS-Cache: N-key=[8] '433c5c0100000000'
	
	* 
	* ==> etcd [bffb16a5a8abed05af74011ab44c07932003d0c076ff7572b0b0252b69b713a8] <==
	* {"level":"info","ts":"2023-11-08T23:36:18.752898Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-08T23:36:18.765527Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-08T23:36:18.765553Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-08T23:36:18.753894Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-11-08T23:36:18.765657Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-11-08T23:36:18.762054Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-11-08T23:36:18.765782Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-11-08T23:36:19.601529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-08T23:36:19.601723Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-08T23:36:19.601837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-11-08T23:36:19.601927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-11-08T23:36:19.60202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-11-08T23:36:19.6021Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-11-08T23:36:19.602185Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-11-08T23:36:19.609615Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-118967 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-08T23:36:19.609814Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-08T23:36:19.610373Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-08T23:36:19.61113Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-08T23:36:19.612586Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-11-08T23:36:19.612856Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T23:36:19.621767Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-08T23:36:19.621817Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-08T23:36:19.633532Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T23:36:19.635206Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T23:36:19.635233Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> gcp-auth [957f30f0af6b6757dc5a875bccdbf76383712d3961bed8c154338d8e2188f955] <==
	* 2023/11/08 23:38:00 GCP Auth Webhook started!
	2023/11/08 23:38:03 Ready to marshal response ...
	2023/11/08 23:38:03 Ready to write response ...
	2023/11/08 23:38:11 Ready to marshal response ...
	2023/11/08 23:38:11 Ready to write response ...
	2023/11/08 23:38:22 Ready to marshal response ...
	2023/11/08 23:38:22 Ready to write response ...
	2023/11/08 23:38:22 Ready to marshal response ...
	2023/11/08 23:38:22 Ready to write response ...
	2023/11/08 23:38:27 Ready to marshal response ...
	2023/11/08 23:38:27 Ready to write response ...
	2023/11/08 23:38:31 Ready to marshal response ...
	2023/11/08 23:38:31 Ready to write response ...
	2023/11/08 23:38:52 Ready to marshal response ...
	2023/11/08 23:38:52 Ready to write response ...
	2023/11/08 23:38:52 Ready to marshal response ...
	2023/11/08 23:38:52 Ready to write response ...
	2023/11/08 23:38:52 Ready to marshal response ...
	2023/11/08 23:38:52 Ready to write response ...
	2023/11/08 23:39:16 Ready to marshal response ...
	2023/11/08 23:39:16 Ready to write response ...
	2023/11/08 23:39:26 Ready to marshal response ...
	2023/11/08 23:39:26 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  23:39:52 up  6:22,  0 users,  load average: 0.72, 1.31, 1.52
	Linux addons-118967 5.15.0-1049-aws #54~20.04.1-Ubuntu SMP Fri Oct 6 22:07:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [d2c6b7564e349dc8d6d810bd0a4d6360e1035072e20874a7406c19c21d9a1410] <==
	* I1108 23:37:50.683981       1 main.go:227] handling current node
	I1108 23:38:00.699041       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1108 23:38:00.699135       1 main.go:227] handling current node
	I1108 23:38:10.711883       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1108 23:38:10.711913       1 main.go:227] handling current node
	I1108 23:38:20.721682       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1108 23:38:20.721711       1 main.go:227] handling current node
	I1108 23:38:30.725966       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1108 23:38:30.725992       1 main.go:227] handling current node
	I1108 23:38:40.729766       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1108 23:38:40.729797       1 main.go:227] handling current node
	I1108 23:38:50.741030       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1108 23:38:50.741057       1 main.go:227] handling current node
	I1108 23:39:00.745648       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1108 23:39:00.745729       1 main.go:227] handling current node
	I1108 23:39:10.754834       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1108 23:39:10.754865       1 main.go:227] handling current node
	I1108 23:39:20.765083       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1108 23:39:20.765119       1 main.go:227] handling current node
	I1108 23:39:30.769553       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1108 23:39:30.769582       1 main.go:227] handling current node
	I1108 23:39:40.773491       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1108 23:39:40.773517       1 main.go:227] handling current node
	I1108 23:39:50.778191       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1108 23:39:50.778220       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [ee136bcbc0a9187c8607dca2f2523a8f682474d1ba33855ea1c6147ffeea3d08] <==
	* I1108 23:38:44.603441       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1108 23:38:44.603754       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1108 23:38:44.637689       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1108 23:38:44.637894       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1108 23:38:44.652653       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1108 23:38:44.652706       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1108 23:38:44.664412       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1108 23:38:44.664631       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1108 23:38:44.675004       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1108 23:38:44.675271       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1108 23:38:44.694064       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1108 23:38:44.694567       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1108 23:38:45.638741       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1108 23:38:45.694553       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1108 23:38:45.706960       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E1108 23:38:47.954515       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1108 23:38:52.048973       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.145.105"}
	I1108 23:39:13.693515       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1108 23:39:13.702355       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1108 23:39:14.720368       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1108 23:39:16.481087       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1108 23:39:16.871539       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1108 23:39:16.880787       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.142.139"}
	I1108 23:39:26.853997       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.220.148"}
	E1108 23:39:43.219044       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0x400ba3d1a0), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0x400e8efa40), ResponseWriter:(*httpsnoop.rw)(0x400e8efa40), Flusher:(*httpsnoop.rw)(0x400e8efa40), CloseNotifier:(*httpsnoop.rw)(0x400e8efa40), Pusher:(*httpsnoop.rw)(0x400e8efa40)}}, encoder:(*versioning.codec)(0x400d339900), memAllocator:(*runtime.Allocator)(0x40046ca228)})
	
	* 
	* ==> kube-controller-manager [f3393685f2d9d688bfd113fcb2c8f46a175686d1ba500b6510a6fdbc5194803c] <==
	* W1108 23:39:25.665972       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1108 23:39:25.666009       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1108 23:39:26.573162       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1108 23:39:26.609145       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-m5k67"
	I1108 23:39:26.630664       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="57.294127ms"
	I1108 23:39:26.647307       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="16.601571ms"
	I1108 23:39:26.647494       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="136.746µs"
	I1108 23:39:26.648383       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="165.341µs"
	W1108 23:39:28.226193       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1108 23:39:28.226231       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1108 23:39:29.872668       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="67.519µs"
	I1108 23:39:30.879759       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="55.467µs"
	I1108 23:39:31.882992       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="73.616µs"
	W1108 23:39:33.027072       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1108 23:39:33.027112       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1108 23:39:37.960398       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I1108 23:39:37.960443       1 shared_informer.go:318] Caches are synced for resource quota
	I1108 23:39:38.420500       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I1108 23:39:38.420549       1 shared_informer.go:318] Caches are synced for garbage collector
	I1108 23:39:42.920066       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="68.972µs"
	I1108 23:39:43.925109       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="5.07µs"
	I1108 23:39:43.925533       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1108 23:39:43.935531       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	W1108 23:39:52.758808       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1108 23:39:52.758845       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [1d8112397059256181dff44957275928f2b246cb17859a08fe368f205201eed6] <==
	* I1108 23:36:39.568903       1 server_others.go:69] "Using iptables proxy"
	I1108 23:36:39.600125       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1108 23:36:39.659265       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 23:36:39.663186       1 server_others.go:152] "Using iptables Proxier"
	I1108 23:36:39.663224       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1108 23:36:39.663231       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1108 23:36:39.663309       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1108 23:36:39.663526       1 server.go:846] "Version info" version="v1.28.3"
	I1108 23:36:39.663536       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 23:36:39.668531       1 config.go:188] "Starting service config controller"
	I1108 23:36:39.668555       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1108 23:36:39.668580       1 config.go:97] "Starting endpoint slice config controller"
	I1108 23:36:39.668584       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1108 23:36:39.669417       1 config.go:315] "Starting node config controller"
	I1108 23:36:39.669425       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1108 23:36:39.773530       1 shared_informer.go:318] Caches are synced for node config
	I1108 23:36:39.773565       1 shared_informer.go:318] Caches are synced for service config
	I1108 23:36:39.773628       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [e8a25681fea82f272492f03fa0d9ae251c8cb0dc7794c188aa399aaf2cb023b7] <==
	* W1108 23:36:22.845031       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1108 23:36:22.845164       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1108 23:36:22.845230       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1108 23:36:22.845250       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1108 23:36:22.845291       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1108 23:36:22.845310       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1108 23:36:22.845353       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1108 23:36:22.845369       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1108 23:36:22.847498       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1108 23:36:22.847535       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1108 23:36:22.847597       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1108 23:36:22.847614       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1108 23:36:22.847652       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1108 23:36:22.847668       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1108 23:36:22.847758       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1108 23:36:22.847781       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1108 23:36:22.847816       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1108 23:36:22.847834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1108 23:36:22.848810       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1108 23:36:22.848840       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1108 23:36:23.662076       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1108 23:36:23.662116       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1108 23:36:23.676151       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1108 23:36:23.676188       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1108 23:36:24.233548       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Nov 08 23:39:31 addons-118967 kubelet[1353]: E1108 23:39:31.869002    1353 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-m5k67_default(fa08315f-c33e-4240-a3f1-74f6420b66a9)\"" pod="default/hello-world-app-5d77478584-m5k67" podUID="fa08315f-c33e-4240-a3f1-74f6420b66a9"
	Nov 08 23:39:36 addons-118967 kubelet[1353]: I1108 23:39:36.592872    1353 scope.go:117] "RemoveContainer" containerID="c1e1930d989d10ca8a46c3dfccc16919ce0552c8ad3fb2648e058c47bd1319f0"
	Nov 08 23:39:36 addons-118967 kubelet[1353]: E1108 23:39:36.593146    1353 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(b6d054c9-6481-4bc7-95e4-3a3dbeba5047)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="b6d054c9-6481-4bc7-95e4-3a3dbeba5047"
	Nov 08 23:39:42 addons-118967 kubelet[1353]: I1108 23:39:42.593347    1353 scope.go:117] "RemoveContainer" containerID="e75190a8c49970e06e93a818a1de443a8aef30836f8213f2b16c82f589e7a947"
	Nov 08 23:39:42 addons-118967 kubelet[1353]: I1108 23:39:42.898677    1353 scope.go:117] "RemoveContainer" containerID="c1e1930d989d10ca8a46c3dfccc16919ce0552c8ad3fb2648e058c47bd1319f0"
	Nov 08 23:39:42 addons-118967 kubelet[1353]: I1108 23:39:42.904614    1353 scope.go:117] "RemoveContainer" containerID="b5c258d081e0fc41ef9f9705421dd57251226d19bc0a698deae7296e55c6d54a"
	Nov 08 23:39:42 addons-118967 kubelet[1353]: E1108 23:39:42.905149    1353 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-m5k67_default(fa08315f-c33e-4240-a3f1-74f6420b66a9)\"" pod="default/hello-world-app-5d77478584-m5k67" podUID="fa08315f-c33e-4240-a3f1-74f6420b66a9"
	Nov 08 23:39:42 addons-118967 kubelet[1353]: I1108 23:39:42.913267    1353 scope.go:117] "RemoveContainer" containerID="e75190a8c49970e06e93a818a1de443a8aef30836f8213f2b16c82f589e7a947"
	Nov 08 23:39:43 addons-118967 kubelet[1353]: I1108 23:39:43.037116    1353 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nl962\" (UniqueName: \"kubernetes.io/projected/b6d054c9-6481-4bc7-95e4-3a3dbeba5047-kube-api-access-nl962\") pod \"b6d054c9-6481-4bc7-95e4-3a3dbeba5047\" (UID: \"b6d054c9-6481-4bc7-95e4-3a3dbeba5047\") "
	Nov 08 23:39:43 addons-118967 kubelet[1353]: I1108 23:39:43.040231    1353 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6d054c9-6481-4bc7-95e4-3a3dbeba5047-kube-api-access-nl962" (OuterVolumeSpecName: "kube-api-access-nl962") pod "b6d054c9-6481-4bc7-95e4-3a3dbeba5047" (UID: "b6d054c9-6481-4bc7-95e4-3a3dbeba5047"). InnerVolumeSpecName "kube-api-access-nl962". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 08 23:39:43 addons-118967 kubelet[1353]: I1108 23:39:43.137752    1353 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nl962\" (UniqueName: \"kubernetes.io/projected/b6d054c9-6481-4bc7-95e4-3a3dbeba5047-kube-api-access-nl962\") on node \"addons-118967\" DevicePath \"\""
	Nov 08 23:39:43 addons-118967 kubelet[1353]: I1108 23:39:43.596738    1353 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b6d054c9-6481-4bc7-95e4-3a3dbeba5047" path="/var/lib/kubelet/pods/b6d054c9-6481-4bc7-95e4-3a3dbeba5047/volumes"
	Nov 08 23:39:45 addons-118967 kubelet[1353]: I1108 23:39:45.595812    1353 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2c8e15af-0e93-40a4-ad3c-5dda12bd05e8" path="/var/lib/kubelet/pods/2c8e15af-0e93-40a4-ad3c-5dda12bd05e8/volumes"
	Nov 08 23:39:45 addons-118967 kubelet[1353]: I1108 23:39:45.596200    1353 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="540dca89-9b74-40c2-b024-fb7c1a73a0e7" path="/var/lib/kubelet/pods/540dca89-9b74-40c2-b024-fb7c1a73a0e7/volumes"
	Nov 08 23:39:47 addons-118967 kubelet[1353]: I1108 23:39:47.277332    1353 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6c9477d4-4298-42fd-965e-4c41f6008a38-webhook-cert\") pod \"6c9477d4-4298-42fd-965e-4c41f6008a38\" (UID: \"6c9477d4-4298-42fd-965e-4c41f6008a38\") "
	Nov 08 23:39:47 addons-118967 kubelet[1353]: I1108 23:39:47.277412    1353 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrn4f\" (UniqueName: \"kubernetes.io/projected/6c9477d4-4298-42fd-965e-4c41f6008a38-kube-api-access-wrn4f\") pod \"6c9477d4-4298-42fd-965e-4c41f6008a38\" (UID: \"6c9477d4-4298-42fd-965e-4c41f6008a38\") "
	Nov 08 23:39:47 addons-118967 kubelet[1353]: I1108 23:39:47.279812    1353 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c9477d4-4298-42fd-965e-4c41f6008a38-kube-api-access-wrn4f" (OuterVolumeSpecName: "kube-api-access-wrn4f") pod "6c9477d4-4298-42fd-965e-4c41f6008a38" (UID: "6c9477d4-4298-42fd-965e-4c41f6008a38"). InnerVolumeSpecName "kube-api-access-wrn4f". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 08 23:39:47 addons-118967 kubelet[1353]: I1108 23:39:47.282330    1353 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c9477d4-4298-42fd-965e-4c41f6008a38-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "6c9477d4-4298-42fd-965e-4c41f6008a38" (UID: "6c9477d4-4298-42fd-965e-4c41f6008a38"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 08 23:39:47 addons-118967 kubelet[1353]: I1108 23:39:47.377655    1353 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wrn4f\" (UniqueName: \"kubernetes.io/projected/6c9477d4-4298-42fd-965e-4c41f6008a38-kube-api-access-wrn4f\") on node \"addons-118967\" DevicePath \"\""
	Nov 08 23:39:47 addons-118967 kubelet[1353]: I1108 23:39:47.377690    1353 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6c9477d4-4298-42fd-965e-4c41f6008a38-webhook-cert\") on node \"addons-118967\" DevicePath \"\""
	Nov 08 23:39:47 addons-118967 kubelet[1353]: I1108 23:39:47.595724    1353 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6c9477d4-4298-42fd-965e-4c41f6008a38" path="/var/lib/kubelet/pods/6c9477d4-4298-42fd-965e-4c41f6008a38/volumes"
	Nov 08 23:39:47 addons-118967 kubelet[1353]: I1108 23:39:47.920522    1353 scope.go:117] "RemoveContainer" containerID="b469e34049cb1a3f1ee9ebe75a9e43d496f64444ea575a14563e6addc3328dac"
	Nov 08 23:39:47 addons-118967 kubelet[1353]: I1108 23:39:47.927650    1353 scope.go:117] "RemoveContainer" containerID="b469e34049cb1a3f1ee9ebe75a9e43d496f64444ea575a14563e6addc3328dac"
	Nov 08 23:39:47 addons-118967 kubelet[1353]: E1108 23:39:47.928225    1353 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b469e34049cb1a3f1ee9ebe75a9e43d496f64444ea575a14563e6addc3328dac\": not found" containerID="b469e34049cb1a3f1ee9ebe75a9e43d496f64444ea575a14563e6addc3328dac"
	Nov 08 23:39:47 addons-118967 kubelet[1353]: I1108 23:39:47.928274    1353 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b469e34049cb1a3f1ee9ebe75a9e43d496f64444ea575a14563e6addc3328dac"} err="failed to get container status \"b469e34049cb1a3f1ee9ebe75a9e43d496f64444ea575a14563e6addc3328dac\": rpc error: code = NotFound desc = an error occurred when try to find container \"b469e34049cb1a3f1ee9ebe75a9e43d496f64444ea575a14563e6addc3328dac\": not found"
	
	* 
	* ==> storage-provisioner [768ff84fab9f37a363325980eb8df0cfc67d62176a53c7cfb62cfcbdbf1d77e7] <==
	* I1108 23:36:45.095223       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 23:36:45.116187       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 23:36:45.116320       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1108 23:36:45.145602       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 23:36:45.145939       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-118967_31e1731d-5774-4c25-9147-f1e6939a33e1!
	I1108 23:36:45.149934       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cce39d9a-a439-4b4e-8f41-6705d45f1d7c", APIVersion:"v1", ResourceVersion:"543", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-118967_31e1731d-5774-4c25-9147-f1e6939a33e1 became leader
	I1108 23:36:45.246794       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-118967_31e1731d-5774-4c25-9147-f1e6939a33e1!
	

                                                
                                                
-- /stdout --
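
The storage-provisioner log above is the standard client-go leader-election sequence: acquire the kube-system/k8s.io-minikube-hostpath lock, then start the controller once the lease is held. A minimal sketch of that pattern (the provisioner itself uses an older Endpoints-based lock, per the Event it logs; the Leases lock, identity string, and timings below are illustrative, not the provisioner's code):

	package main
	
	import (
		"context"
		"log"
		"time"
	
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig() // running inside the cluster, like the provisioner pod
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
	
		// Same namespace/name as the lease in the log above.
		lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
			"kube-system", "k8s.io-minikube-hostpath",
			client.CoreV1(), client.CoordinationV1(),
			resourcelock.ResourceLockConfig{Identity: "example-identity"})
		if err != nil {
			log.Fatal(err)
		}
	
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Print("acquired lease; starting provisioner controller")
				},
				OnStoppedLeading: func() {
					log.Print("lost lease; stopping")
				},
			},
		})
	}
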
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-118967 -n addons-118967
helpers_test.go:261: (dbg) Run:  kubectl --context addons-118967 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (38.12s)
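
The assertion that fails above, addons_test.go:296, shells out to nslookup against the minikube node IP and treats any non-zero exit as a failure. A minimal Go sketch of that check (the function name and timeout are illustrative, not the test's exact code):

	package main
	
	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)
	
	// lookupViaIngressDNS runs `nslookup <name> <server>`, as the test does,
	// and surfaces the combined output on failure.
	func lookupViaIngressDNS(name, server string) error {
		ctx, cancel := context.WithTimeout(context.Background(), 20*time.Second)
		defer cancel()
		out, err := exec.CommandContext(ctx, "nslookup", name, server).CombinedOutput()
		if err != nil {
			return fmt.Errorf("nslookup %s %s: %v\n%s", name, server, err, out)
		}
		return nil
	}
	
	func main() {
		// Values from this run; a healthy ingress-dns pod should answer.
		if err := lookupViaIngressDNS("hello-john.test", "192.168.49.2"); err != nil {
			fmt.Println(err) // here: ";; connection timed out; no servers could be reached"
		}
	}
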

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 image load --daemon gcr.io/google-containers/addon-resizer:functional-471648 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-471648 image load --daemon gcr.io/google-containers/addon-resizer:functional-471648 --alsologtostderr: (3.676220102s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-471648" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.98s)
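
The check at functional_test.go:442 verifies the load by listing the images inside the cluster and scanning stdout for the tag. Roughly, in Go (the helper name is illustrative; binary, profile, and tag are taken from this run):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// imageLoaded reports whether `minikube image ls` lists the given tag.
	func imageLoaded(minikube, profile, tag string) (bool, error) {
		out, err := exec.Command(minikube, "-p", profile, "image", "ls").Output()
		if err != nil {
			return false, err
		}
		return strings.Contains(string(out), tag), nil
	}
	
	func main() {
		ok, err := imageLoaded("out/minikube-linux-arm64", "functional-471648",
			"gcr.io/google-containers/addon-resizer:functional-471648")
		fmt.Println(ok, err) // in this run: false <nil>; the load completed but the image never appeared
	}

The ImageReloadDaemon and ImageTagAndLoadDaemon failures below hit the same missing-image check.
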

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 image load --daemon gcr.io/google-containers/addon-resizer:functional-471648 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-471648 image load --daemon gcr.io/google-containers/addon-resizer:functional-471648 --alsologtostderr: (3.325482855s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-471648" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.58s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.135716139s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-471648
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 image load --daemon gcr.io/google-containers/addon-resizer:functional-471648 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-471648 image load --daemon gcr.io/google-containers/addon-resizer:functional-471648 --alsologtostderr: (3.82047768s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-471648" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.35s)
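
The setup for this variant (functional_test.go:234-244) is a plain pull, tag, load pipeline before the same `image ls` check. The three steps in Go via os/exec (tags and paths copied from this run; the helper is illustrative):

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// run executes one command and returns its combined output on failure.
	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s %v: %v\n%s", name, args, err, out)
		}
		return nil
	}
	
	func main() {
		steps := [][]string{
			{"docker", "pull", "gcr.io/google-containers/addon-resizer:1.8.9"},
			{"docker", "tag", "gcr.io/google-containers/addon-resizer:1.8.9",
				"gcr.io/google-containers/addon-resizer:functional-471648"},
			{"out/minikube-linux-arm64", "-p", "functional-471648", "image", "load",
				"--daemon", "gcr.io/google-containers/addon-resizer:functional-471648"},
		}
		for _, s := range steps {
			if err := run(s[0], s[1:]...); err != nil {
				fmt.Println(err)
				return
			}
		}
	}
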

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 image save gcr.io/google-containers/addon-resizer:functional-471648 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.69s)
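
The assertion at functional_test.go:385 is simply a stat on the tarball path once `image save` returns. A sketch with the path from this failure:

	package main
	
	import (
		"fmt"
		"os"
	)
	
	func main() {
		path := "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar"
		if _, err := os.Stat(path); os.IsNotExist(err) {
			fmt.Printf("expected %q to exist after `image save`, but it doesn't\n", path)
		}
	}
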

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1108 23:45:00.928787  784508 out.go:296] Setting OutFile to fd 1 ...
	I1108 23:45:00.929505  784508 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:45:00.929558  784508 out.go:309] Setting ErrFile to fd 2...
	I1108 23:45:00.929581  784508 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:45:00.929896  784508 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17586-749551/.minikube/bin
	I1108 23:45:00.930713  784508 config.go:182] Loaded profile config "functional-471648": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1108 23:45:00.930937  784508 config.go:182] Loaded profile config "functional-471648": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1108 23:45:00.931487  784508 cli_runner.go:164] Run: docker container inspect functional-471648 --format={{.State.Status}}
	I1108 23:45:00.953669  784508 ssh_runner.go:195] Run: systemctl --version
	I1108 23:45:00.953758  784508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-471648
	I1108 23:45:00.975658  784508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33717 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/functional-471648/id_rsa Username:docker}
	I1108 23:45:01.069150  784508 cache_images.go:286] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	W1108 23:45:01.069228  784508 cache_images.go:254] Failed to load cached images for profile functional-471648. make sure the profile is running. loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar: no such file or directory
	I1108 23:45:01.069252  784508 cache_images.go:262] succeeded pushing to: 
	I1108 23:45:01.069261  784508 cache_images.go:263] failed pushing to: functional-471648

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)
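
This failure is downstream of the previous one: the save never produced the tarball, so the stat in cache_images.go fails before any load is attempted. When the file does exist, listing its entries is a quick sanity check that the save was complete (an image tarball typically carries a manifest, manifest.json or index.json, plus layer blobs). A purely illustrative sketch, not part of the test:

	package main
	
	import (
		"archive/tar"
		"fmt"
		"io"
		"os"
	)
	
	func main() {
		f, err := os.Open("/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar")
		if err != nil {
			fmt.Println(err) // in this run: no such file or directory, matching the stderr above
			return
		}
		defer f.Close()
	
		tr := tar.NewReader(f)
		for {
			hdr, err := tr.Next()
			if err == io.EOF {
				break
			}
			if err != nil {
				fmt.Println("corrupt archive:", err)
				return
			}
			fmt.Println(hdr.Name) // e.g. manifest.json and the layer tarballs
		}
	}
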

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (57.18s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-316909 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-316909 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.493399921s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-316909 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-316909 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [de0d9351-354e-4258-a7c1-1a4a16d74e1f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [de0d9351-354e-4258-a7c1-1a4a16d74e1f] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.017044813s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-316909 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-316909 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-316909 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E1108 23:48:00.824331  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.020872761s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

                                                
                                                

                                                
                                                

                                                
                                                
stderr: 
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-316909 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-316909 addons disable ingress-dns --alsologtostderr -v=1: (5.199448247s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-316909 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-316909 addons disable ingress --alsologtostderr -v=1: (7.610920765s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-316909
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-316909:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2afd4e02fd272fbc4938240511a04433038f363a1fa94cad3eff5b0b9ec1f2f1",
	        "Created": "2023-11-08T23:46:02.520779265Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 788737,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-08T23:46:02.897037997Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:62753ecb37c4e3c5bf7b6c8d02fe88b543f553e92492fca245cded98b0d364dd",
	        "ResolvConfPath": "/var/lib/docker/containers/2afd4e02fd272fbc4938240511a04433038f363a1fa94cad3eff5b0b9ec1f2f1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2afd4e02fd272fbc4938240511a04433038f363a1fa94cad3eff5b0b9ec1f2f1/hostname",
	        "HostsPath": "/var/lib/docker/containers/2afd4e02fd272fbc4938240511a04433038f363a1fa94cad3eff5b0b9ec1f2f1/hosts",
	        "LogPath": "/var/lib/docker/containers/2afd4e02fd272fbc4938240511a04433038f363a1fa94cad3eff5b0b9ec1f2f1/2afd4e02fd272fbc4938240511a04433038f363a1fa94cad3eff5b0b9ec1f2f1-json.log",
	        "Name": "/ingress-addon-legacy-316909",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-316909:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-316909",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7dd99a5dee723062010e7937015a70ee32b6b242d99c3a1922fda4b7411625f4-init/diff:/var/lib/docker/overlay2/a37793fd41a65d2d53e46d1ba8e85f7ca52242d993ce6ed8de0d2d0e3cddac68/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7dd99a5dee723062010e7937015a70ee32b6b242d99c3a1922fda4b7411625f4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7dd99a5dee723062010e7937015a70ee32b6b242d99c3a1922fda4b7411625f4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7dd99a5dee723062010e7937015a70ee32b6b242d99c3a1922fda4b7411625f4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-316909",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-316909/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-316909",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-316909",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-316909",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "22f485465535fe43c62594d0a1328ce755aaffe8d647deb9558a69c2598be75d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33722"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33721"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33718"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33720"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33719"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/22f485465535",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-316909": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2afd4e02fd27",
	                        "ingress-addon-legacy-316909"
	                    ],
	                    "NetworkID": "d4f3fb327dcd482c97338f670b377c2ec1d5308780a53e6738a3f69d7a2e13d9",
	                    "EndpointID": "b3036ad7cd476538d028b95417e0bf743fe5e3d66244d85c68354046f5320fee",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
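
The NetworkSettings.Ports map in the inspect output above is what minikube reads to find the host port mapped to the node's SSH port; the `docker container inspect -f` call in the earlier ImageLoadFromFile stderr uses exactly this template. The same lookup in Go, shelling out with that template (error handling kept minimal):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// hostPortFor returns the host port Docker mapped for a container port,
	// using the Go template seen in the minikube logs.
	func hostPortFor(container, port string) (string, error) {
		format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}
	
	func main() {
		p, err := hostPortFor("ingress-addon-legacy-316909", "22/tcp")
		fmt.Println(p, err) // per the inspect output above: 33722 <nil>
	}
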
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-316909 -n ingress-addon-legacy-316909
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-316909 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-316909 logs -n 25: (1.456222887s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-471648                                                   | functional-471648           | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup4081031784/001:/mount1 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| mount          | -p functional-471648                                                   | functional-471648           | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup4081031784/001:/mount2 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| mount          | -p functional-471648                                                   | functional-471648           | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup4081031784/001:/mount3 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| ssh            | functional-471648 ssh findmnt                                          | functional-471648           | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC | 08 Nov 23 23:45 UTC |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| ssh            | functional-471648 ssh findmnt                                          | functional-471648           | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC | 08 Nov 23 23:45 UTC |
	|                | -T /mount2                                                             |                             |         |         |                     |                     |
	| ssh            | functional-471648 ssh findmnt                                          | functional-471648           | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC | 08 Nov 23 23:45 UTC |
	|                | -T /mount3                                                             |                             |         |         |                     |                     |
	| mount          | -p functional-471648                                                   | functional-471648           | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC |                     |
	|                | --kill=true                                                            |                             |         |         |                     |                     |
	| update-context | functional-471648                                                      | functional-471648           | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC | 08 Nov 23 23:45 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-471648                                                      | functional-471648           | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC | 08 Nov 23 23:45 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-471648                                                      | functional-471648           | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC | 08 Nov 23 23:45 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| image          | functional-471648                                                      | functional-471648           | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC | 08 Nov 23 23:45 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-471648                                                      | functional-471648           | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC | 08 Nov 23 23:45 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-471648 ssh pgrep                                            | functional-471648           | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-471648 image build -t                                       | functional-471648           | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC | 08 Nov 23 23:45 UTC |
	|                | localhost/my-image:functional-471648                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-471648 image ls                                             | functional-471648           | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC | 08 Nov 23 23:45 UTC |
	| image          | functional-471648                                                      | functional-471648           | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC | 08 Nov 23 23:45 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-471648                                                      | functional-471648           | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC | 08 Nov 23 23:45 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| delete         | -p functional-471648                                                   | functional-471648           | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC | 08 Nov 23 23:45 UTC |
	| start          | -p ingress-addon-legacy-316909                                         | ingress-addon-legacy-316909 | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC | 08 Nov 23 23:47 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=containerd                                         |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-316909                                            | ingress-addon-legacy-316909 | jenkins | v1.32.0 | 08 Nov 23 23:47 UTC | 08 Nov 23 23:47 UTC |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-316909                                            | ingress-addon-legacy-316909 | jenkins | v1.32.0 | 08 Nov 23 23:47 UTC | 08 Nov 23 23:47 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-316909                                            | ingress-addon-legacy-316909 | jenkins | v1.32.0 | 08 Nov 23 23:47 UTC | 08 Nov 23 23:47 UTC |
	|                | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-316909 ip                                         | ingress-addon-legacy-316909 | jenkins | v1.32.0 | 08 Nov 23 23:47 UTC | 08 Nov 23 23:47 UTC |
	| addons         | ingress-addon-legacy-316909                                            | ingress-addon-legacy-316909 | jenkins | v1.32.0 | 08 Nov 23 23:48 UTC | 08 Nov 23 23:48 UTC |
	|                | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-316909                                            | ingress-addon-legacy-316909 | jenkins | v1.32.0 | 08 Nov 23 23:48 UTC | 08 Nov 23 23:48 UTC |
	|                | addons disable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/08 23:45:44
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 23:45:44.706193  788278 out.go:296] Setting OutFile to fd 1 ...
	I1108 23:45:44.706411  788278 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:45:44.706423  788278 out.go:309] Setting ErrFile to fd 2...
	I1108 23:45:44.706431  788278 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:45:44.706813  788278 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17586-749551/.minikube/bin
	I1108 23:45:44.707273  788278 out.go:303] Setting JSON to false
	I1108 23:45:44.708388  788278 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23294,"bootTime":1699463851,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1108 23:45:44.708462  788278 start.go:138] virtualization:  
	I1108 23:45:44.711039  788278 out.go:177] * [ingress-addon-legacy-316909] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1108 23:45:44.713452  788278 out.go:177]   - MINIKUBE_LOCATION=17586
	I1108 23:45:44.715230  788278 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 23:45:44.713550  788278 notify.go:220] Checking for updates...
	I1108 23:45:44.719047  788278 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17586-749551/kubeconfig
	I1108 23:45:44.721240  788278 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-749551/.minikube
	I1108 23:45:44.722904  788278 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 23:45:44.724694  788278 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 23:45:44.726682  788278 driver.go:378] Setting default libvirt URI to qemu:///system
	I1108 23:45:44.751658  788278 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1108 23:45:44.751761  788278 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 23:45:44.831633  788278 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-11-08 23:45:44.821670726 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1108 23:45:44.831742  788278 docker.go:295] overlay module found
	I1108 23:45:44.833951  788278 out.go:177] * Using the docker driver based on user configuration
	I1108 23:45:44.835769  788278 start.go:298] selected driver: docker
	I1108 23:45:44.835795  788278 start.go:902] validating driver "docker" against <nil>
	I1108 23:45:44.835810  788278 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 23:45:44.836609  788278 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 23:45:44.911139  788278 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-11-08 23:45:44.90155296 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1108 23:45:44.911287  788278 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1108 23:45:44.911533  788278 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 23:45:44.913172  788278 out.go:177] * Using Docker driver with root privileges
	I1108 23:45:44.914841  788278 cni.go:84] Creating CNI manager for ""
	I1108 23:45:44.914861  788278 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1108 23:45:44.914874  788278 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 23:45:44.914889  788278 start_flags.go:323] config:
	{Name:ingress-addon-legacy-316909 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-316909 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 23:45:44.917032  788278 out.go:177] * Starting control plane node ingress-addon-legacy-316909 in cluster ingress-addon-legacy-316909
	I1108 23:45:44.918614  788278 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1108 23:45:44.920423  788278 out.go:177] * Pulling base image ...
	I1108 23:45:44.921968  788278 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I1108 23:45:44.922034  788278 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1108 23:45:44.939526  788278 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon, skipping pull
	I1108 23:45:44.939549  788278 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in daemon, skipping load
	I1108 23:45:45.001984  788278 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4
	I1108 23:45:45.002014  788278 cache.go:56] Caching tarball of preloaded images
	I1108 23:45:45.002200  788278 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I1108 23:45:45.004346  788278 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1108 23:45:45.006291  788278 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I1108 23:45:45.172334  788278 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4?checksum=md5:9e505be2989b8c051b1372c317471064 -> /home/jenkins/minikube-integration/17586-749551/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4
	I1108 23:45:54.578952  788278 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I1108 23:45:54.579085  788278 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17586-749551/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I1108 23:45:55.765132  788278 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on containerd
	I1108 23:45:55.765571  788278 profile.go:148] Saving config to /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/config.json ...
	I1108 23:45:55.765604  788278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/config.json: {Name:mkdcf654516f0d59a8ce53627f643184ab1f8b12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:45:55.765793  788278 cache.go:194] Successfully downloaded all kic artifacts
	I1108 23:45:55.765843  788278 start.go:365] acquiring machines lock for ingress-addon-legacy-316909: {Name:mka75e3215d405e594a7dbdd5beea1cd744e5d58 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 23:45:55.765894  788278 start.go:369] acquired machines lock for "ingress-addon-legacy-316909" in 40.697µs
	I1108 23:45:55.765929  788278 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-316909 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-316909 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1108 23:45:55.765995  788278 start.go:125] createHost starting for "" (driver="docker")
	I1108 23:45:55.768133  788278 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1108 23:45:55.768453  788278 start.go:159] libmachine.API.Create for "ingress-addon-legacy-316909" (driver="docker")
	I1108 23:45:55.768473  788278 client.go:168] LocalClient.Create starting
	I1108 23:45:55.768549  788278 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem
	I1108 23:45:55.768595  788278 main.go:141] libmachine: Decoding PEM data...
	I1108 23:45:55.768611  788278 main.go:141] libmachine: Parsing certificate...
	I1108 23:45:55.768701  788278 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem
	I1108 23:45:55.768725  788278 main.go:141] libmachine: Decoding PEM data...
	I1108 23:45:55.768748  788278 main.go:141] libmachine: Parsing certificate...
	I1108 23:45:55.769207  788278 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-316909 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 23:45:55.793627  788278 cli_runner.go:211] docker network inspect ingress-addon-legacy-316909 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 23:45:55.793712  788278 network_create.go:281] running [docker network inspect ingress-addon-legacy-316909] to gather additional debugging logs...
	I1108 23:45:55.793734  788278 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-316909
	W1108 23:45:55.811869  788278 cli_runner.go:211] docker network inspect ingress-addon-legacy-316909 returned with exit code 1
	I1108 23:45:55.811902  788278 network_create.go:284] error running [docker network inspect ingress-addon-legacy-316909]: docker network inspect ingress-addon-legacy-316909: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-316909 not found
	I1108 23:45:55.811919  788278 network_create.go:286] output of [docker network inspect ingress-addon-legacy-316909]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-316909 not found
	
	** /stderr **
	I1108 23:45:55.812032  788278 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 23:45:55.830540  788278 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400049ac20}
	I1108 23:45:55.830576  788278 network_create.go:124] attempt to create docker network ingress-addon-legacy-316909 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1108 23:45:55.830635  788278 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-316909 ingress-addon-legacy-316909
	I1108 23:45:55.912198  788278 network_create.go:108] docker network ingress-addon-legacy-316909 192.168.49.0/24 created
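An illustrative aside, not part of the captured log: the subnet and gateway chosen above can be confirmed against the live network with a much simpler inspect template than the full one minikube uses (the template below is hand-written):

	docker network inspect ingress-addon-legacy-316909 \
	  --format '{{.Name}} {{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	# expected, given the log above:
	# ingress-addon-legacy-316909 192.168.49.0/24 192.168.49.1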
	I1108 23:45:55.912228  788278 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-316909" container
	I1108 23:45:55.912300  788278 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 23:45:55.929226  788278 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-316909 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-316909 --label created_by.minikube.sigs.k8s.io=true
	I1108 23:45:55.947295  788278 oci.go:103] Successfully created a docker volume ingress-addon-legacy-316909
	I1108 23:45:55.947395  788278 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-316909-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-316909 --entrypoint /usr/bin/test -v ingress-addon-legacy-316909:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1108 23:45:57.470028  788278 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-316909-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-316909 --entrypoint /usr/bin/test -v ingress-addon-legacy-316909:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib: (1.522590196s)
	I1108 23:45:57.470057  788278 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-316909
	I1108 23:45:57.470076  788278 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I1108 23:45:57.470096  788278 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 23:45:57.470181  788278 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17586-749551/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-316909:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1108 23:46:02.435649  788278 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17586-749551/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-316909:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.96542484s)
	I1108 23:46:02.435682  788278 kic.go:203] duration metric: took 4.965583 seconds to extract preloaded images to volume
	W1108 23:46:02.435831  788278 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1108 23:46:02.435954  788278 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 23:46:02.504624  788278 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-316909 --name ingress-addon-legacy-316909 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-316909 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-316909 --network ingress-addon-legacy-316909 --ip 192.168.49.2 --volume ingress-addon-legacy-316909:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1108 23:46:02.905010  788278 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-316909 --format={{.State.Running}}
	I1108 23:46:02.928893  788278 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-316909 --format={{.State.Status}}
	I1108 23:46:02.952998  788278 cli_runner.go:164] Run: docker exec ingress-addon-legacy-316909 stat /var/lib/dpkg/alternatives/iptables
	I1108 23:46:03.028929  788278 oci.go:144] the created container "ingress-addon-legacy-316909" has a running status.
	I1108 23:46:03.028957  788278 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/ingress-addon-legacy-316909/id_rsa...
	I1108 23:46:03.287541  788278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/ingress-addon-legacy-316909/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1108 23:46:03.287590  788278 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17586-749551/.minikube/machines/ingress-addon-legacy-316909/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 23:46:03.324837  788278 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-316909 --format={{.State.Status}}
	I1108 23:46:03.348098  788278 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 23:46:03.348122  788278 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-316909 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 23:46:03.428639  788278 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-316909 --format={{.State.Status}}
	I1108 23:46:03.463424  788278 machine.go:88] provisioning docker machine ...
	I1108 23:46:03.463458  788278 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-316909"
	I1108 23:46:03.463525  788278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-316909
	I1108 23:46:03.496778  788278 main.go:141] libmachine: Using SSH client type: native
	I1108 23:46:03.497263  788278 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 33722 <nil> <nil>}
	I1108 23:46:03.497278  788278 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-316909 && echo "ingress-addon-legacy-316909" | sudo tee /etc/hostname
	I1108 23:46:03.498076  788278 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1108 23:46:06.640318  788278 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-316909
	
	I1108 23:46:06.640435  788278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-316909
	I1108 23:46:06.664230  788278 main.go:141] libmachine: Using SSH client type: native
	I1108 23:46:06.664649  788278 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 33722 <nil> <nil>}
	I1108 23:46:06.664675  788278 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-316909' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-316909/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-316909' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 23:46:06.790872  788278 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1108 23:46:06.790901  788278 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17586-749551/.minikube CaCertPath:/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17586-749551/.minikube}
	I1108 23:46:06.790922  788278 ubuntu.go:177] setting up certificates
	I1108 23:46:06.790931  788278 provision.go:83] configureAuth start
	I1108 23:46:06.790993  788278 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-316909
	I1108 23:46:06.808674  788278 provision.go:138] copyHostCerts
	I1108 23:46:06.808718  788278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1108 23:46:06.808756  788278 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1108 23:46:06.808769  788278 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1108 23:46:06.808847  788278 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1108 23:46:06.808927  788278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1108 23:46:06.808948  788278 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1108 23:46:06.808957  788278 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1108 23:46:06.808994  788278 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1108 23:46:06.809039  788278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1108 23:46:06.809059  788278 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1108 23:46:06.809065  788278 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1108 23:46:06.809089  788278 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1108 23:46:06.809137  788278 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-316909 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-316909]
	I1108 23:46:07.035687  788278 provision.go:172] copyRemoteCerts
	I1108 23:46:07.035756  788278 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 23:46:07.035804  788278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-316909
	I1108 23:46:07.054615  788278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33722 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/ingress-addon-legacy-316909/id_rsa Username:docker}
	I1108 23:46:07.148573  788278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1108 23:46:07.148637  788278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 23:46:07.178305  788278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1108 23:46:07.178375  788278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 23:46:07.208581  788278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1108 23:46:07.208643  788278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1108 23:46:07.238298  788278 provision.go:86] duration metric: configureAuth took 447.352531ms
	I1108 23:46:07.238324  788278 ubuntu.go:193] setting minikube options for container-runtime
	I1108 23:46:07.238518  788278 config.go:182] Loaded profile config "ingress-addon-legacy-316909": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I1108 23:46:07.238525  788278 machine.go:91] provisioned docker machine in 3.775080525s
	I1108 23:46:07.238531  788278 client.go:171] LocalClient.Create took 11.470053519s
	I1108 23:46:07.238554  788278 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-316909" took 11.470102068s
	I1108 23:46:07.238564  788278 start.go:300] post-start starting for "ingress-addon-legacy-316909" (driver="docker")
	I1108 23:46:07.238572  788278 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 23:46:07.238628  788278 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 23:46:07.238666  788278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-316909
	I1108 23:46:07.256610  788278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33722 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/ingress-addon-legacy-316909/id_rsa Username:docker}
	I1108 23:46:07.348487  788278 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 23:46:07.352842  788278 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 23:46:07.352889  788278 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1108 23:46:07.352905  788278 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1108 23:46:07.352912  788278 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1108 23:46:07.352922  788278 filesync.go:126] Scanning /home/jenkins/minikube-integration/17586-749551/.minikube/addons for local assets ...
	I1108 23:46:07.352983  788278 filesync.go:126] Scanning /home/jenkins/minikube-integration/17586-749551/.minikube/files for local assets ...
	I1108 23:46:07.353065  788278 filesync.go:149] local asset: /home/jenkins/minikube-integration/17586-749551/.minikube/files/etc/ssl/certs/7549022.pem -> 7549022.pem in /etc/ssl/certs
	I1108 23:46:07.353072  788278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17586-749551/.minikube/files/etc/ssl/certs/7549022.pem -> /etc/ssl/certs/7549022.pem
	I1108 23:46:07.353178  788278 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 23:46:07.363540  788278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-749551/.minikube/files/etc/ssl/certs/7549022.pem --> /etc/ssl/certs/7549022.pem (1708 bytes)
	I1108 23:46:07.392663  788278 start.go:303] post-start completed in 154.08565ms
	I1108 23:46:07.393034  788278 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-316909
	I1108 23:46:07.409996  788278 profile.go:148] Saving config to /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/config.json ...
	I1108 23:46:07.410266  788278 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 23:46:07.410323  788278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-316909
	I1108 23:46:07.428949  788278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33722 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/ingress-addon-legacy-316909/id_rsa Username:docker}
	I1108 23:46:07.520108  788278 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 23:46:07.526155  788278 start.go:128] duration metric: createHost completed in 11.760145437s
	I1108 23:46:07.526182  788278 start.go:83] releasing machines lock for "ingress-addon-legacy-316909", held for 11.760279583s
	I1108 23:46:07.526259  788278 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-316909
	I1108 23:46:07.544677  788278 ssh_runner.go:195] Run: cat /version.json
	I1108 23:46:07.544742  788278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-316909
	I1108 23:46:07.544994  788278 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 23:46:07.545049  788278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-316909
	I1108 23:46:07.565587  788278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33722 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/ingress-addon-legacy-316909/id_rsa Username:docker}
	I1108 23:46:07.577875  788278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33722 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/ingress-addon-legacy-316909/id_rsa Username:docker}
	I1108 23:46:07.654087  788278 ssh_runner.go:195] Run: systemctl --version
	I1108 23:46:07.858876  788278 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1108 23:46:07.864644  788278 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1108 23:46:07.895591  788278 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1108 23:46:07.895671  788278 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 23:46:07.929126  788278 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1108 23:46:07.929153  788278 start.go:472] detecting cgroup driver to use...
	I1108 23:46:07.929195  788278 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1108 23:46:07.929275  788278 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1108 23:46:07.946464  788278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1108 23:46:07.960112  788278 docker.go:203] disabling cri-docker service (if available) ...
	I1108 23:46:07.960232  788278 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 23:46:07.976796  788278 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 23:46:07.994031  788278 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 23:46:08.091173  788278 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 23:46:08.205189  788278 docker.go:219] disabling docker service ...
	I1108 23:46:08.205278  788278 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 23:46:08.229086  788278 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 23:46:08.243933  788278 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 23:46:08.351197  788278 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 23:46:08.450918  788278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 23:46:08.465189  788278 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 23:46:08.485310  788278 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1108 23:46:08.497568  788278 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1108 23:46:08.510991  788278 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1108 23:46:08.511110  788278 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1108 23:46:08.523639  788278 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1108 23:46:08.536373  788278 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1108 23:46:08.548690  788278 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1108 23:46:08.561093  788278 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 23:46:08.572989  788278 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1108 23:46:08.585247  788278 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 23:46:08.596363  788278 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 23:46:08.607148  788278 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 23:46:08.709085  788278 ssh_runner.go:195] Run: sudo systemctl restart containerd
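An illustrative aside, not part of the captured log: the sed edits above rewrite several settings in /etc/containerd/config.toml before the restart; a hand-written one-liner to confirm the result looks like this:

	# Each value below was set by one of the sed commands in the log:
	grep -E 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml
	#   sandbox_image = "registry.k8s.io/pause:3.2"   <- pause image pinned for kubeadm/v1.18
	#   SystemdCgroup = false                         <- matches the detected "cgroupfs" driver
	#   conf_dir = "/etc/cni/net.d"                   <- where the CNI configs patched earlier live
	systemctl is-active containerd                    # "active" once the restart has settled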
	I1108 23:46:08.839869  788278 start.go:519] Will wait 60s for socket path /run/containerd/containerd.sock
	I1108 23:46:08.839966  788278 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1108 23:46:08.845098  788278 start.go:540] Will wait 60s for crictl version
	I1108 23:46:08.845175  788278 ssh_runner.go:195] Run: which crictl
	I1108 23:46:08.849519  788278 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1108 23:46:08.891076  788278 start.go:556] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.24
	RuntimeApiVersion:  v1
	I1108 23:46:08.891147  788278 ssh_runner.go:195] Run: containerd --version
	I1108 23:46:08.918324  788278 ssh_runner.go:195] Run: containerd --version
	I1108 23:46:08.952541  788278 out.go:177] * Preparing Kubernetes v1.18.20 on containerd 1.6.24 ...
	I1108 23:46:08.954561  788278 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-316909 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 23:46:08.971917  788278 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1108 23:46:08.976495  788278 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
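An illustrative aside, not part of the captured log: the /etc/hosts rewrite above is a small idempotent pattern: filter out any stale entry, append a fresh one, and copy the temp file back over /etc/hosts. The same pattern, commented:

	# Keep every line except an old host.minikube.internal mapping, then re-add it:
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  echo $'192.168.49.1\thost.minikube.internal'
	} > /tmp/h.$$
	# cp rather than mv: inside the container /etc/hosts is a bind mount,
	# so the file must be overwritten in place rather than replaced.
	sudo cp /tmp/h.$$ /etc/hosts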
	I1108 23:46:08.989960  788278 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I1108 23:46:08.990027  788278 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 23:46:09.034917  788278 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1108 23:46:09.034999  788278 ssh_runner.go:195] Run: which lz4
	I1108 23:46:09.039869  788278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17586-749551/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1108 23:46:09.039981  788278 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1108 23:46:09.044680  788278 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1108 23:46:09.044717  788278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-749551/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (489149349 bytes)
	I1108 23:46:11.298161  788278 containerd.go:547] Took 2.258209 seconds to copy over tarball
	I1108 23:46:11.298301  788278 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1108 23:46:14.038368  788278 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.740021948s)
	I1108 23:46:14.038398  788278 containerd.go:554] Took 2.740159 seconds to extract the tarball
	I1108 23:46:14.038409  788278 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1108 23:46:14.124692  788278 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 23:46:14.230443  788278 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1108 23:46:14.373830  788278 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 23:46:14.417796  788278 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1108 23:46:14.417822  788278 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1108 23:46:14.417860  788278 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 23:46:14.418047  788278 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1108 23:46:14.418132  788278 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1108 23:46:14.418213  788278 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1108 23:46:14.418305  788278 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1108 23:46:14.418395  788278 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1108 23:46:14.418466  788278 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1108 23:46:14.418543  788278 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1108 23:46:14.419396  788278 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1108 23:46:14.419833  788278 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 23:46:14.420093  788278 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1108 23:46:14.420228  788278 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1108 23:46:14.420340  788278 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1108 23:46:14.420451  788278 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1108 23:46:14.420585  788278 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1108 23:46:14.420652  788278 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	W1108 23:46:14.970832  788278 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1108 23:46:14.970988  788278 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns:1.6.7"
	W1108 23:46:15.001915  788278 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1108 23:46:15.002213  788278 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.18.20"
	W1108 23:46:15.018713  788278 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1108 23:46:15.018908  788278 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.4.3-0"
	I1108 23:46:15.019095  788278 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.2"
	W1108 23:46:15.032285  788278 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1108 23:46:15.032513  788278 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.18.20"
	W1108 23:46:15.039966  788278 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1108 23:46:15.040169  788278 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.18.20"
	W1108 23:46:15.049292  788278 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1108 23:46:15.049539  788278 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.18.20"
	W1108 23:46:15.301372  788278 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1108 23:46:15.301509  788278 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I1108 23:46:15.571555  788278 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1108 23:46:15.571620  788278 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1108 23:46:15.571678  788278 ssh_runner.go:195] Run: which crictl
	I1108 23:46:15.572351  788278 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1108 23:46:15.572388  788278 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1108 23:46:15.572422  788278 ssh_runner.go:195] Run: which crictl
	I1108 23:46:15.691109  788278 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1108 23:46:15.691190  788278 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1108 23:46:15.691270  788278 ssh_runner.go:195] Run: which crictl
	I1108 23:46:15.691377  788278 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1108 23:46:15.691412  788278 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1108 23:46:15.691481  788278 ssh_runner.go:195] Run: which crictl
	I1108 23:46:15.802619  788278 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1108 23:46:15.802701  788278 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1108 23:46:15.802785  788278 ssh_runner.go:195] Run: which crictl
	I1108 23:46:15.802896  788278 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1108 23:46:15.802934  788278 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1108 23:46:15.802976  788278 ssh_runner.go:195] Run: which crictl
	I1108 23:46:15.822624  788278 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1108 23:46:15.822706  788278 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1108 23:46:15.822785  788278 ssh_runner.go:195] Run: which crictl
	I1108 23:46:15.912326  788278 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1108 23:46:15.912412  788278 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 23:46:15.912497  788278 ssh_runner.go:195] Run: which crictl
	I1108 23:46:15.912632  788278 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1108 23:46:15.912721  788278 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1108 23:46:15.912821  788278 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1108 23:46:15.912882  788278 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1108 23:46:15.912958  788278 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1108 23:46:15.913014  788278 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1108 23:46:15.913067  788278 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1108 23:46:15.922151  788278 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 23:46:16.135645  788278 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17586-749551/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I1108 23:46:16.135720  788278 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17586-749551/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1108 23:46:16.135759  788278 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17586-749551/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I1108 23:46:16.135796  788278 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17586-749551/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1108 23:46:16.135832  788278 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17586-749551/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1108 23:46:16.135871  788278 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17586-749551/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1108 23:46:16.135918  788278 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17586-749551/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1108 23:46:16.135945  788278 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17586-749551/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1108 23:46:16.135994  788278 cache_images.go:92] LoadImages completed in 1.718158115s
	W1108 23:46:16.136090  788278 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17586-749551/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20: no such file or directory
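An illustrative aside, not part of the captured log: the "arch mismatch: want arm64 got amd64" warnings above come from comparing a resolved image's architecture against the host's. The same comparison can be made by hand with the Docker CLI (hand-written example):

	# Print the per-platform architectures published for one of the images above:
	docker manifest inspect registry.k8s.io/coredns:1.6.7 | grep '"architecture"'
	# The loader here wants an arm64 entry; after resolving amd64 it tries the
	# local image cache instead, which is empty on this first start, hence the
	# "Unable to load cached images" warning above.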
	I1108 23:46:16.136150  788278 ssh_runner.go:195] Run: sudo crictl info
	I1108 23:46:16.177892  788278 cni.go:84] Creating CNI manager for ""
	I1108 23:46:16.177917  788278 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1108 23:46:16.177948  788278 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1108 23:46:16.177968  788278 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-316909 NodeName:ingress-addon-legacy-316909 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/ce
rts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1108 23:46:16.178100  788278 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "ingress-addon-legacy-316909"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
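An illustrative aside, not part of the captured log: the generated config above holds four YAML documents, which are written a few lines below to /var/tmp/minikube/kubeadm.yaml.new; a quick structural check of that file is:

	grep '^kind:' /var/tmp/minikube/kubeadm.yaml.new
	# kind: InitConfiguration
	# kind: ClusterConfiguration
	# kind: KubeletConfiguration
	# kind: KubeProxyConfiguration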
	
	I1108 23:46:16.178174  788278 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=ingress-addon-legacy-316909 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-316909 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1108 23:46:16.178239  788278 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1108 23:46:16.189157  788278 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 23:46:16.189244  788278 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 23:46:16.200202  788278 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I1108 23:46:16.222496  788278 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1108 23:46:16.244607  788278 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2131 bytes)
	I1108 23:46:16.266296  788278 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1108 23:46:16.270915  788278 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 23:46:16.284231  788278 certs.go:56] Setting up /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909 for IP: 192.168.49.2
	I1108 23:46:16.284263  788278 certs.go:190] acquiring lock for shared ca certs: {Name:mk3980826f8d7f07af38edd9b91f2a0fe0b143c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:46:16.284394  788278 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.key
	I1108 23:46:16.284435  788278 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17586-749551/.minikube/proxy-client-ca.key
	I1108 23:46:16.284489  788278 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.key
	I1108 23:46:16.284506  788278 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt with IP's: []
	I1108 23:46:16.880034  788278 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt ...
	I1108 23:46:16.880068  788278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt: {Name:mk73d1a5cae7277a52b35711378214167175e4dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:46:16.880297  788278 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.key ...
	I1108 23:46:16.880312  788278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.key: {Name:mk0fbdeb3a972c51ddc523be2c3c4d673e555ea6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:46:16.880411  788278 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/apiserver.key.dd3b5fb2
	I1108 23:46:16.880429  788278 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1108 23:46:17.592613  788278 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/apiserver.crt.dd3b5fb2 ...
	I1108 23:46:17.592647  788278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/apiserver.crt.dd3b5fb2: {Name:mk655b3d6c1369d85a9eb01dd360586d654e116e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:46:17.592835  788278 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/apiserver.key.dd3b5fb2 ...
	I1108 23:46:17.592850  788278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/apiserver.key.dd3b5fb2: {Name:mk9bd5e80aebddfd7c93da542a0c1347e01144fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:46:17.592932  788278 certs.go:337] copying /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/apiserver.crt
	I1108 23:46:17.593012  788278 certs.go:341] copying /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/apiserver.key
	I1108 23:46:17.593077  788278 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/proxy-client.key
	I1108 23:46:17.593095  788278 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/proxy-client.crt with IP's: []
	I1108 23:46:18.148999  788278 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/proxy-client.crt ...
	I1108 23:46:18.149033  788278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/proxy-client.crt: {Name:mk7116de570650942b446318a83dc16219b26e22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:46:18.149300  788278 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/proxy-client.key ...
	I1108 23:46:18.149316  788278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/proxy-client.key: {Name:mk487e4335cdb5bb0d60654af9c6360ce2aa6e1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:46:18.149410  788278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1108 23:46:18.149454  788278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1108 23:46:18.149469  788278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1108 23:46:18.149485  788278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1108 23:46:18.149506  788278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1108 23:46:18.149524  788278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1108 23:46:18.149566  788278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17586-749551/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1108 23:46:18.149582  788278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17586-749551/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1108 23:46:18.149642  788278 certs.go:437] found cert: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/home/jenkins/minikube-integration/17586-749551/.minikube/certs/754902.pem (1338 bytes)
	W1108 23:46:18.149685  788278 certs.go:433] ignoring /home/jenkins/minikube-integration/17586-749551/.minikube/certs/home/jenkins/minikube-integration/17586-749551/.minikube/certs/754902_empty.pem, impossibly tiny 0 bytes
	I1108 23:46:18.149702  788278 certs.go:437] found cert: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 23:46:18.149736  788278 certs.go:437] found cert: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem (1078 bytes)
	I1108 23:46:18.149769  788278 certs.go:437] found cert: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem (1123 bytes)
	I1108 23:46:18.149796  788278 certs.go:437] found cert: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem (1679 bytes)
	I1108 23:46:18.149846  788278 certs.go:437] found cert: /home/jenkins/minikube-integration/17586-749551/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17586-749551/.minikube/files/etc/ssl/certs/7549022.pem (1708 bytes)
	I1108 23:46:18.149878  788278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/754902.pem -> /usr/share/ca-certificates/754902.pem
	I1108 23:46:18.149897  788278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17586-749551/.minikube/files/etc/ssl/certs/7549022.pem -> /usr/share/ca-certificates/7549022.pem
	I1108 23:46:18.149921  788278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1108 23:46:18.150559  788278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1108 23:46:18.180258  788278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 23:46:18.210029  788278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 23:46:18.239882  788278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 23:46:18.270054  788278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-749551/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 23:46:18.299412  788278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-749551/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 23:46:18.329299  788278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-749551/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 23:46:18.359072  788278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-749551/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 23:46:18.388169  788278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-749551/.minikube/certs/754902.pem --> /usr/share/ca-certificates/754902.pem (1338 bytes)
	I1108 23:46:18.417112  788278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-749551/.minikube/files/etc/ssl/certs/7549022.pem --> /usr/share/ca-certificates/7549022.pem (1708 bytes)
	I1108 23:46:18.446048  788278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-749551/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 23:46:18.475575  788278 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 23:46:18.497597  788278 ssh_runner.go:195] Run: openssl version
	I1108 23:46:18.505026  788278 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7549022.pem && ln -fs /usr/share/ca-certificates/7549022.pem /etc/ssl/certs/7549022.pem"
	I1108 23:46:18.517003  788278 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7549022.pem
	I1108 23:46:18.521778  788278 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  8 23:42 /usr/share/ca-certificates/7549022.pem
	I1108 23:46:18.521892  788278 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7549022.pem
	I1108 23:46:18.530740  788278 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7549022.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 23:46:18.543513  788278 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 23:46:18.555270  788278 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 23:46:18.560038  788278 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  8 23:36 /usr/share/ca-certificates/minikubeCA.pem
	I1108 23:46:18.560113  788278 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 23:46:18.568842  788278 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 23:46:18.580480  788278 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/754902.pem && ln -fs /usr/share/ca-certificates/754902.pem /etc/ssl/certs/754902.pem"
	I1108 23:46:18.592175  788278 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/754902.pem
	I1108 23:46:18.597119  788278 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  8 23:42 /usr/share/ca-certificates/754902.pem
	I1108 23:46:18.597211  788278 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/754902.pem
	I1108 23:46:18.606049  788278 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/754902.pem /etc/ssl/certs/51391683.0"
	I1108 23:46:18.618115  788278 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1108 23:46:18.622565  788278 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1108 23:46:18.622676  788278 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-316909 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d1a75fe08206deb6fc1cd915add724f43e3a5600 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-316909 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 23:46:18.622773  788278 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1108 23:46:18.622833  788278 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 23:46:18.678089  788278 cri.go:89] found id: ""
	I1108 23:46:18.678211  788278 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 23:46:18.689131  788278 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 23:46:18.699966  788278 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1108 23:46:18.700041  788278 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 23:46:18.711007  788278 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 23:46:18.711056  788278 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 23:46:18.769467  788278 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1108 23:46:18.769555  788278 kubeadm.go:322] [preflight] Running pre-flight checks
	I1108 23:46:18.821869  788278 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1108 23:46:18.821946  788278 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1049-aws
	I1108 23:46:18.821985  788278 kubeadm.go:322] OS: Linux
	I1108 23:46:18.822033  788278 kubeadm.go:322] CGROUPS_CPU: enabled
	I1108 23:46:18.822083  788278 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1108 23:46:18.822132  788278 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1108 23:46:18.822182  788278 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1108 23:46:18.822231  788278 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1108 23:46:18.822280  788278 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1108 23:46:18.915321  788278 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 23:46:18.915489  788278 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 23:46:18.915626  788278 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1108 23:46:19.157564  788278 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 23:46:19.159187  788278 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 23:46:19.159345  788278 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1108 23:46:19.269302  788278 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 23:46:19.272596  788278 out.go:204]   - Generating certificates and keys ...
	I1108 23:46:19.272868  788278 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1108 23:46:19.272973  788278 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1108 23:46:20.170460  788278 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 23:46:20.490624  788278 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1108 23:46:21.302546  788278 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1108 23:46:21.426894  788278 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1108 23:46:22.018235  788278 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1108 23:46:22.018585  788278 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-316909 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1108 23:46:22.346416  788278 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1108 23:46:22.346790  788278 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-316909 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1108 23:46:22.722574  788278 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 23:46:23.006995  788278 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 23:46:23.628608  788278 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1108 23:46:23.629008  788278 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 23:46:23.888965  788278 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 23:46:24.551608  788278 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 23:46:24.847720  788278 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 23:46:25.377484  788278 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 23:46:25.378778  788278 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 23:46:25.380688  788278 out.go:204]   - Booting up control plane ...
	I1108 23:46:25.380879  788278 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 23:46:25.389963  788278 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 23:46:25.391881  788278 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 23:46:25.393384  788278 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 23:46:25.396186  788278 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1108 23:46:37.398912  788278 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.002613 seconds
	I1108 23:46:37.399030  788278 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 23:46:37.415670  788278 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 23:46:37.944515  788278 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 23:46:37.944700  788278 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-316909 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1108 23:46:38.452578  788278 kubeadm.go:322] [bootstrap-token] Using token: k9yweh.z3s22en8pphaieki
	I1108 23:46:38.454372  788278 out.go:204]   - Configuring RBAC rules ...
	I1108 23:46:38.454492  788278 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 23:46:38.459734  788278 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 23:46:38.470485  788278 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 23:46:38.473678  788278 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 23:46:38.477046  788278 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 23:46:38.480075  788278 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 23:46:38.494843  788278 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 23:46:38.766801  788278 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1108 23:46:38.881818  788278 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1108 23:46:38.889293  788278 kubeadm.go:322] 
	I1108 23:46:38.889367  788278 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1108 23:46:38.889373  788278 kubeadm.go:322] 
	I1108 23:46:38.889493  788278 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1108 23:46:38.889498  788278 kubeadm.go:322] 
	I1108 23:46:38.889522  788278 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1108 23:46:38.889578  788278 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 23:46:38.889648  788278 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 23:46:38.889663  788278 kubeadm.go:322] 
	I1108 23:46:38.889716  788278 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1108 23:46:38.889796  788278 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 23:46:38.889860  788278 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 23:46:38.889865  788278 kubeadm.go:322] 
	I1108 23:46:38.889943  788278 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 23:46:38.890015  788278 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1108 23:46:38.890020  788278 kubeadm.go:322] 
	I1108 23:46:38.890098  788278 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token k9yweh.z3s22en8pphaieki \
	I1108 23:46:38.890197  788278 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ab3ece522733f055757c65e666fc1044b61a233d0aa5f64decfdb326c72a9a27 \
	I1108 23:46:38.890220  788278 kubeadm.go:322]     --control-plane 
	I1108 23:46:38.890224  788278 kubeadm.go:322] 
	I1108 23:46:38.890304  788278 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1108 23:46:38.890308  788278 kubeadm.go:322] 
	I1108 23:46:38.890386  788278 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token k9yweh.z3s22en8pphaieki \
	I1108 23:46:38.890484  788278 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ab3ece522733f055757c65e666fc1044b61a233d0aa5f64decfdb326c72a9a27 
	I1108 23:46:38.893258  788278 kubeadm.go:322] W1108 23:46:18.768719    1110 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1108 23:46:38.893569  788278 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1049-aws\n", err: exit status 1
	I1108 23:46:38.893680  788278 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 23:46:38.893830  788278 kubeadm.go:322] W1108 23:46:25.389979    1110 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1108 23:46:38.893981  788278 kubeadm.go:322] W1108 23:46:25.392024    1110 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1108 23:46:38.893995  788278 cni.go:84] Creating CNI manager for ""
	I1108 23:46:38.894004  788278 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1108 23:46:38.896041  788278 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1108 23:46:38.897701  788278 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 23:46:38.902928  788278 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1108 23:46:38.902955  788278 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1108 23:46:38.925072  788278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 23:46:39.349295  788278 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 23:46:39.349457  788278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:46:39.349535  788278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e21c718ea4d79be9ab6c82476dffc8ce4079c94e minikube.k8s.io/name=ingress-addon-legacy-316909 minikube.k8s.io/updated_at=2023_11_08T23_46_39_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:46:39.502299  788278 ops.go:34] apiserver oom_adj: -16
	I1108 23:46:39.502423  788278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:46:39.599848  788278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:46:40.194726  788278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:46:40.695071  788278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:46:41.194098  788278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:46:41.694902  788278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:46:42.194650  788278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:46:42.694917  788278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:46:43.194100  788278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:46:43.694139  788278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:46:44.194749  788278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:46:44.694245  788278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:46:45.194135  788278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:46:45.694131  788278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:46:46.194116  788278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:46:46.694117  788278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:46:47.194380  788278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:46:47.694664  788278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:46:48.194704  788278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:46:48.694634  788278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:46:49.194659  788278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:46:49.694681  788278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:46:50.194887  788278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:46:50.694884  788278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:46:51.194116  788278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:46:51.694699  788278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:46:52.194693  788278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:46:52.694116  788278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:46:53.194704  788278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:46:53.694267  788278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 23:46:53.880107  788278 kubeadm.go:1081] duration metric: took 14.530689359s to wait for elevateKubeSystemPrivileges.
	I1108 23:46:53.880135  788278 kubeadm.go:406] StartCluster complete in 35.257468241s
	I1108 23:46:53.880151  788278 settings.go:142] acquiring lock: {Name:mk7d57467a4d6a0a6ec02c87b75e10e0424576f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:46:53.880210  788278 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17586-749551/kubeconfig
	I1108 23:46:53.880891  788278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17586-749551/kubeconfig: {Name:mk63034fab281bd30b4004637fdc41282aa952da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:46:53.881664  788278 kapi.go:59] client config for ingress-addon-legacy-316909: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt", KeyFile:"/home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.key", CAFile:"/home/jenkins/minikube-integration/17586-749551/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c4610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1108 23:46:53.882989  788278 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 23:46:53.883701  788278 cert_rotation.go:137] Starting client certificate rotation controller
	I1108 23:46:53.883630  788278 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1108 23:46:53.884033  788278 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-316909"
	I1108 23:46:53.884051  788278 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-316909"
	I1108 23:46:53.884109  788278 host.go:66] Checking if "ingress-addon-legacy-316909" exists ...
	I1108 23:46:53.884584  788278 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-316909 --format={{.State.Status}}
	I1108 23:46:53.884707  788278 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-316909"
	I1108 23:46:53.884725  788278 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-316909"
	I1108 23:46:53.884952  788278 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-316909 --format={{.State.Status}}
	I1108 23:46:53.885567  788278 config.go:182] Loaded profile config "ingress-addon-legacy-316909": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I1108 23:46:53.947586  788278 kapi.go:59] client config for ingress-addon-legacy-316909: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt", KeyFile:"/home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.key", CAFile:"/home/jenkins/minikube-integration/17586-749551/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c4610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1108 23:46:53.947847  788278 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-316909"
	I1108 23:46:53.947885  788278 host.go:66] Checking if "ingress-addon-legacy-316909" exists ...
	I1108 23:46:53.948355  788278 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-316909 --format={{.State.Status}}
	I1108 23:46:53.957066  788278 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 23:46:53.961142  788278 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 23:46:53.961166  788278 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 23:46:53.961242  788278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-316909
	I1108 23:46:53.990416  788278 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-316909" context rescaled to 1 replicas
	I1108 23:46:53.990453  788278 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1108 23:46:53.992724  788278 out.go:177] * Verifying Kubernetes components...
	I1108 23:46:53.995907  788278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 23:46:54.001346  788278 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 23:46:54.001375  788278 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 23:46:54.001496  788278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-316909
	I1108 23:46:54.024943  788278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33722 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/ingress-addon-legacy-316909/id_rsa Username:docker}
	I1108 23:46:54.060640  788278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33722 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/ingress-addon-legacy-316909/id_rsa Username:docker}
	I1108 23:46:54.376341  788278 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 23:46:54.391202  788278 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 23:46:54.391874  788278 kapi.go:59] client config for ingress-addon-legacy-316909: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt", KeyFile:"/home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.key", CAFile:"/home/jenkins/minikube-integration/17586-749551/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c4610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1108 23:46:54.392139  788278 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-316909" to be "Ready" ...
	I1108 23:46:54.395605  788278 node_ready.go:49] node "ingress-addon-legacy-316909" has status "Ready":"True"
	I1108 23:46:54.395630  788278 node_ready.go:38] duration metric: took 3.464979ms waiting for node "ingress-addon-legacy-316909" to be "Ready" ...
	I1108 23:46:54.395641  788278 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 23:46:54.404281  788278 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-8z84m" in "kube-system" namespace to be "Ready" ...
	I1108 23:46:54.430883  788278 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 23:46:54.935718  788278 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1108 23:46:55.021053  788278 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1108 23:46:55.023341  788278 addons.go:502] enable addons completed in 1.13968488s: enabled=[default-storageclass storage-provisioner]
	I1108 23:46:56.425598  788278 pod_ready.go:102] pod "coredns-66bff467f8-8z84m" in "kube-system" namespace has status "Ready":"False"
	I1108 23:46:58.921597  788278 pod_ready.go:102] pod "coredns-66bff467f8-8z84m" in "kube-system" namespace has status "Ready":"False"
	I1108 23:47:00.922185  788278 pod_ready.go:102] pod "coredns-66bff467f8-8z84m" in "kube-system" namespace has status "Ready":"False"
	I1108 23:47:03.422835  788278 pod_ready.go:102] pod "coredns-66bff467f8-8z84m" in "kube-system" namespace has status "Ready":"False"
	I1108 23:47:05.922448  788278 pod_ready.go:102] pod "coredns-66bff467f8-8z84m" in "kube-system" namespace has status "Ready":"False"
	I1108 23:47:07.924039  788278 pod_ready.go:102] pod "coredns-66bff467f8-8z84m" in "kube-system" namespace has status "Ready":"False"
	I1108 23:47:10.421684  788278 pod_ready.go:102] pod "coredns-66bff467f8-8z84m" in "kube-system" namespace has status "Ready":"False"
	I1108 23:47:12.421853  788278 pod_ready.go:102] pod "coredns-66bff467f8-8z84m" in "kube-system" namespace has status "Ready":"False"
	I1108 23:47:13.922053  788278 pod_ready.go:92] pod "coredns-66bff467f8-8z84m" in "kube-system" namespace has status "Ready":"True"
	I1108 23:47:13.922084  788278 pod_ready.go:81] duration metric: took 19.517760112s waiting for pod "coredns-66bff467f8-8z84m" in "kube-system" namespace to be "Ready" ...
	I1108 23:47:13.922096  788278 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-fg9sn" in "kube-system" namespace to be "Ready" ...
	I1108 23:47:13.924942  788278 pod_ready.go:97] error getting pod "coredns-66bff467f8-fg9sn" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-fg9sn" not found
	I1108 23:47:13.924973  788278 pod_ready.go:81] duration metric: took 2.869142ms waiting for pod "coredns-66bff467f8-fg9sn" in "kube-system" namespace to be "Ready" ...
	E1108 23:47:13.924984  788278 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-66bff467f8-fg9sn" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-fg9sn" not found
	I1108 23:47:13.924995  788278 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-316909" in "kube-system" namespace to be "Ready" ...
	I1108 23:47:13.930875  788278 pod_ready.go:92] pod "etcd-ingress-addon-legacy-316909" in "kube-system" namespace has status "Ready":"True"
	I1108 23:47:13.930904  788278 pod_ready.go:81] duration metric: took 5.892704ms waiting for pod "etcd-ingress-addon-legacy-316909" in "kube-system" namespace to be "Ready" ...
	I1108 23:47:13.930920  788278 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-316909" in "kube-system" namespace to be "Ready" ...
	I1108 23:47:13.936667  788278 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-316909" in "kube-system" namespace has status "Ready":"True"
	I1108 23:47:13.936691  788278 pod_ready.go:81] duration metric: took 5.762743ms waiting for pod "kube-apiserver-ingress-addon-legacy-316909" in "kube-system" namespace to be "Ready" ...
	I1108 23:47:13.936708  788278 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-316909" in "kube-system" namespace to be "Ready" ...
	I1108 23:47:13.942563  788278 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-316909" in "kube-system" namespace has status "Ready":"True"
	I1108 23:47:13.942589  788278 pod_ready.go:81] duration metric: took 5.85606ms waiting for pod "kube-controller-manager-ingress-addon-legacy-316909" in "kube-system" namespace to be "Ready" ...
	I1108 23:47:13.942602  788278 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xwfqm" in "kube-system" namespace to be "Ready" ...
	I1108 23:47:14.116534  788278 request.go:629] Waited for 171.224798ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-316909
	I1108 23:47:14.119326  788278 pod_ready.go:92] pod "kube-proxy-xwfqm" in "kube-system" namespace has status "Ready":"True"
	I1108 23:47:14.119352  788278 pod_ready.go:81] duration metric: took 176.741486ms waiting for pod "kube-proxy-xwfqm" in "kube-system" namespace to be "Ready" ...
	I1108 23:47:14.119364  788278 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-316909" in "kube-system" namespace to be "Ready" ...
	I1108 23:47:14.316819  788278 request.go:629] Waited for 197.35191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-316909
	I1108 23:47:14.516490  788278 request.go:629] Waited for 196.255332ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-316909
	I1108 23:47:14.519502  788278 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-316909" in "kube-system" namespace has status "Ready":"True"
	I1108 23:47:14.519529  788278 pod_ready.go:81] duration metric: took 400.155008ms waiting for pod "kube-scheduler-ingress-addon-legacy-316909" in "kube-system" namespace to be "Ready" ...
	I1108 23:47:14.519540  788278 pod_ready.go:38] duration metric: took 20.123887823s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 23:47:14.519556  788278 api_server.go:52] waiting for apiserver process to appear ...
	I1108 23:47:14.519619  788278 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 23:47:14.533613  788278 api_server.go:72] duration metric: took 20.543127489s to wait for apiserver process to appear ...
	I1108 23:47:14.533647  788278 api_server.go:88] waiting for apiserver healthz status ...
	I1108 23:47:14.533667  788278 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1108 23:47:14.542725  788278 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1108 23:47:14.543623  788278 api_server.go:141] control plane version: v1.18.20
	I1108 23:47:14.543649  788278 api_server.go:131] duration metric: took 9.99389ms to wait for apiserver health ...
	I1108 23:47:14.543658  788278 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 23:47:14.717368  788278 request.go:629] Waited for 173.624683ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1108 23:47:14.723409  788278 system_pods.go:59] 8 kube-system pods found
	I1108 23:47:14.723445  788278 system_pods.go:61] "coredns-66bff467f8-8z84m" [192aca36-897e-4382-a573-2d7381a6221d] Running
	I1108 23:47:14.723452  788278 system_pods.go:61] "etcd-ingress-addon-legacy-316909" [03626929-6e2a-4e8d-a154-7f93b8bed60d] Running
	I1108 23:47:14.723457  788278 system_pods.go:61] "kindnet-x2bp8" [6fb4bbac-1ca0-4b46-8f83-40bb883e6ac9] Running
	I1108 23:47:14.723487  788278 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-316909" [c1d8e4a3-59a3-4238-b73a-55d29330f22e] Running
	I1108 23:47:14.723502  788278 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-316909" [a028715e-c710-44e7-bb8f-96e47d3e9f25] Running
	I1108 23:47:14.723507  788278 system_pods.go:61] "kube-proxy-xwfqm" [4e82489f-d162-4b81-ae20-e41a4daf165b] Running
	I1108 23:47:14.723512  788278 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-316909" [04b6b9b6-20da-4be9-864d-e421e4de3431] Running
	I1108 23:47:14.723521  788278 system_pods.go:61] "storage-provisioner" [89b3ff56-f759-4def-914a-ed2aa81d4f3a] Running
	I1108 23:47:14.723527  788278 system_pods.go:74] duration metric: took 179.863059ms to wait for pod list to return data ...
	I1108 23:47:14.723536  788278 default_sa.go:34] waiting for default service account to be created ...
	I1108 23:47:14.916971  788278 request.go:629] Waited for 193.331897ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1108 23:47:14.919423  788278 default_sa.go:45] found service account: "default"
	I1108 23:47:14.919451  788278 default_sa.go:55] duration metric: took 195.905836ms for default service account to be created ...
	I1108 23:47:14.919460  788278 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 23:47:15.116834  788278 request.go:629] Waited for 197.278327ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1108 23:47:15.124357  788278 system_pods.go:86] 8 kube-system pods found
	I1108 23:47:15.124396  788278 system_pods.go:89] "coredns-66bff467f8-8z84m" [192aca36-897e-4382-a573-2d7381a6221d] Running
	I1108 23:47:15.124405  788278 system_pods.go:89] "etcd-ingress-addon-legacy-316909" [03626929-6e2a-4e8d-a154-7f93b8bed60d] Running
	I1108 23:47:15.124414  788278 system_pods.go:89] "kindnet-x2bp8" [6fb4bbac-1ca0-4b46-8f83-40bb883e6ac9] Running
	I1108 23:47:15.124420  788278 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-316909" [c1d8e4a3-59a3-4238-b73a-55d29330f22e] Running
	I1108 23:47:15.124425  788278 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-316909" [a028715e-c710-44e7-bb8f-96e47d3e9f25] Running
	I1108 23:47:15.124432  788278 system_pods.go:89] "kube-proxy-xwfqm" [4e82489f-d162-4b81-ae20-e41a4daf165b] Running
	I1108 23:47:15.124437  788278 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-316909" [04b6b9b6-20da-4be9-864d-e421e4de3431] Running
	I1108 23:47:15.124441  788278 system_pods.go:89] "storage-provisioner" [89b3ff56-f759-4def-914a-ed2aa81d4f3a] Running
	I1108 23:47:15.124449  788278 system_pods.go:126] duration metric: took 204.982396ms to wait for k8s-apps to be running ...
	I1108 23:47:15.124463  788278 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 23:47:15.124530  788278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 23:47:15.142327  788278 system_svc.go:56] duration metric: took 17.853536ms WaitForService to wait for kubelet.
	I1108 23:47:15.142402  788278 kubeadm.go:581] duration metric: took 21.151923571s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1108 23:47:15.142434  788278 node_conditions.go:102] verifying NodePressure condition ...
	I1108 23:47:15.317097  788278 request.go:629] Waited for 174.523518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1108 23:47:15.319933  788278 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 23:47:15.319977  788278 node_conditions.go:123] node cpu capacity is 2
	I1108 23:47:15.319992  788278 node_conditions.go:105] duration metric: took 177.549426ms to run NodePressure ...
	I1108 23:47:15.320014  788278 start.go:228] waiting for startup goroutines ...
	I1108 23:47:15.320028  788278 start.go:233] waiting for cluster config update ...
	I1108 23:47:15.320039  788278 start.go:242] writing updated cluster config ...
	I1108 23:47:15.320319  788278 ssh_runner.go:195] Run: rm -f paused
	I1108 23:47:15.378033  788278 start.go:600] kubectl: 1.28.3, cluster: 1.18.20 (minor skew: 10)
	I1108 23:47:15.380159  788278 out.go:177] 
	W1108 23:47:15.381783  788278 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.18.20.
	I1108 23:47:15.383190  788278 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1108 23:47:15.384783  788278 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-316909" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	54c5680217c05       dd1b12fcb6097       11 seconds ago       Exited              hello-world-app           2                   88ef59b810e8a       hello-world-app-5f5d8b66bb-7x9vq
	8de229e5590be       aae348c9fbd40       37 seconds ago       Running             nginx                     0                   43d22eeaed5e4       nginx
	ae54e2b9bae56       d7f0cba3aa5bf       57 seconds ago       Exited              controller                0                   2eec13862a2ad       ingress-nginx-controller-7fcf777cb7-vm6kf
	ed39ad62d9e0a       a883f7fc35610       About a minute ago   Exited              patch                     0                   e52a151107c30       ingress-nginx-admission-patch-7f62r
	16e2ae01d0725       a883f7fc35610       About a minute ago   Exited              create                    0                   e972e31dff65d       ingress-nginx-admission-create-rhsn9
	3298cb0af2d5e       6e17ba78cf3eb       About a minute ago   Running             coredns                   0                   56c7257ecc73d       coredns-66bff467f8-8z84m
	e69bad241d6a8       ba04bb24b9575       About a minute ago   Running             storage-provisioner       0                   b37c43fa52da0       storage-provisioner
	0c32f47061e32       04b4eaa3d3db8       About a minute ago   Running             kindnet-cni               0                   1825a3c2a8601       kindnet-x2bp8
	a696f4e36dcfa       565297bc6f7d4       About a minute ago   Running             kube-proxy                0                   5c0dd6fdc21b6       kube-proxy-xwfqm
	c7b33eb2a87c1       2694cf044d665       About a minute ago   Running             kube-apiserver            0                   5a7117e98a60d       kube-apiserver-ingress-addon-legacy-316909
	a0a61502ad1ea       095f37015706d       About a minute ago   Running             kube-scheduler            0                   6f19048b3b21d       kube-scheduler-ingress-addon-legacy-316909
	833391434e1fa       68a4fac29a865       About a minute ago   Running             kube-controller-manager   0                   72517a0c14645       kube-controller-manager-ingress-addon-legacy-316909
	6e0b8f2959380       ab707b0a0ea33       About a minute ago   Running             etcd                      0                   197c37f783770       etcd-ingress-addon-legacy-316909
	
	* 
	* ==> containerd <==
	* Nov 08 23:48:10 ingress-addon-legacy-316909 containerd[830]: time="2023-11-08T23:48:10.396851921Z" level=info msg="StopPodSandbox for \"375e037b2cff06f5d6be7f74bcc7949829baa9a6b01bc6159046d43b83f72f99\" returns successfully"
	Nov 08 23:48:13 ingress-addon-legacy-316909 containerd[830]: time="2023-11-08T23:48:13.363015822Z" level=info msg="StopContainer for \"ae54e2b9bae56f7e6d4bcbef28e0a42b077cae6b54abad3db158cabc41561141\" with timeout 2 (s)"
	Nov 08 23:48:13 ingress-addon-legacy-316909 containerd[830]: time="2023-11-08T23:48:13.363677801Z" level=info msg="Stop container \"ae54e2b9bae56f7e6d4bcbef28e0a42b077cae6b54abad3db158cabc41561141\" with signal terminated"
	Nov 08 23:48:13 ingress-addon-legacy-316909 containerd[830]: time="2023-11-08T23:48:13.385291790Z" level=info msg="StopContainer for \"ae54e2b9bae56f7e6d4bcbef28e0a42b077cae6b54abad3db158cabc41561141\" with timeout 2 (s)"
	Nov 08 23:48:13 ingress-addon-legacy-316909 containerd[830]: time="2023-11-08T23:48:13.391070361Z" level=info msg="Skipping the sending of signal terminated to container \"ae54e2b9bae56f7e6d4bcbef28e0a42b077cae6b54abad3db158cabc41561141\" because a prior stop with timeout>0 request already sent the signal"
	Nov 08 23:48:15 ingress-addon-legacy-316909 containerd[830]: time="2023-11-08T23:48:15.386058979Z" level=info msg="Kill container \"ae54e2b9bae56f7e6d4bcbef28e0a42b077cae6b54abad3db158cabc41561141\""
	Nov 08 23:48:15 ingress-addon-legacy-316909 containerd[830]: time="2023-11-08T23:48:15.391273334Z" level=info msg="Kill container \"ae54e2b9bae56f7e6d4bcbef28e0a42b077cae6b54abad3db158cabc41561141\""
	Nov 08 23:48:15 ingress-addon-legacy-316909 containerd[830]: time="2023-11-08T23:48:15.474995692Z" level=info msg="shim disconnected" id=ae54e2b9bae56f7e6d4bcbef28e0a42b077cae6b54abad3db158cabc41561141
	Nov 08 23:48:15 ingress-addon-legacy-316909 containerd[830]: time="2023-11-08T23:48:15.475194821Z" level=warning msg="cleaning up after shim disconnected" id=ae54e2b9bae56f7e6d4bcbef28e0a42b077cae6b54abad3db158cabc41561141 namespace=k8s.io
	Nov 08 23:48:15 ingress-addon-legacy-316909 containerd[830]: time="2023-11-08T23:48:15.475218386Z" level=info msg="cleaning up dead shim"
	Nov 08 23:48:15 ingress-addon-legacy-316909 containerd[830]: time="2023-11-08T23:48:15.485822867Z" level=warning msg="cleanup warnings time=\"2023-11-08T23:48:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4652 runtime=io.containerd.runc.v2\n"
	Nov 08 23:48:15 ingress-addon-legacy-316909 containerd[830]: time="2023-11-08T23:48:15.488323510Z" level=info msg="StopContainer for \"ae54e2b9bae56f7e6d4bcbef28e0a42b077cae6b54abad3db158cabc41561141\" returns successfully"
	Nov 08 23:48:15 ingress-addon-legacy-316909 containerd[830]: time="2023-11-08T23:48:15.488453954Z" level=info msg="StopContainer for \"ae54e2b9bae56f7e6d4bcbef28e0a42b077cae6b54abad3db158cabc41561141\" returns successfully"
	Nov 08 23:48:15 ingress-addon-legacy-316909 containerd[830]: time="2023-11-08T23:48:15.488994482Z" level=info msg="StopPodSandbox for \"2eec13862a2ad1c4c7c81ee5a9b021fdcd4f7a4e2c613ec9fa11832154562bb2\""
	Nov 08 23:48:15 ingress-addon-legacy-316909 containerd[830]: time="2023-11-08T23:48:15.489056602Z" level=info msg="Container to stop \"ae54e2b9bae56f7e6d4bcbef28e0a42b077cae6b54abad3db158cabc41561141\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Nov 08 23:48:15 ingress-addon-legacy-316909 containerd[830]: time="2023-11-08T23:48:15.489285032Z" level=info msg="StopPodSandbox for \"2eec13862a2ad1c4c7c81ee5a9b021fdcd4f7a4e2c613ec9fa11832154562bb2\""
	Nov 08 23:48:15 ingress-addon-legacy-316909 containerd[830]: time="2023-11-08T23:48:15.489324047Z" level=info msg="Container to stop \"ae54e2b9bae56f7e6d4bcbef28e0a42b077cae6b54abad3db158cabc41561141\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Nov 08 23:48:15 ingress-addon-legacy-316909 containerd[830]: time="2023-11-08T23:48:15.525081762Z" level=info msg="shim disconnected" id=2eec13862a2ad1c4c7c81ee5a9b021fdcd4f7a4e2c613ec9fa11832154562bb2
	Nov 08 23:48:15 ingress-addon-legacy-316909 containerd[830]: time="2023-11-08T23:48:15.525295226Z" level=warning msg="cleaning up after shim disconnected" id=2eec13862a2ad1c4c7c81ee5a9b021fdcd4f7a4e2c613ec9fa11832154562bb2 namespace=k8s.io
	Nov 08 23:48:15 ingress-addon-legacy-316909 containerd[830]: time="2023-11-08T23:48:15.525375185Z" level=info msg="cleaning up dead shim"
	Nov 08 23:48:15 ingress-addon-legacy-316909 containerd[830]: time="2023-11-08T23:48:15.536004510Z" level=warning msg="cleanup warnings time=\"2023-11-08T23:48:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4688 runtime=io.containerd.runc.v2\n"
	Nov 08 23:48:15 ingress-addon-legacy-316909 containerd[830]: time="2023-11-08T23:48:15.604806768Z" level=info msg="TearDown network for sandbox \"2eec13862a2ad1c4c7c81ee5a9b021fdcd4f7a4e2c613ec9fa11832154562bb2\" successfully"
	Nov 08 23:48:15 ingress-addon-legacy-316909 containerd[830]: time="2023-11-08T23:48:15.604861751Z" level=info msg="StopPodSandbox for \"2eec13862a2ad1c4c7c81ee5a9b021fdcd4f7a4e2c613ec9fa11832154562bb2\" returns successfully"
	Nov 08 23:48:15 ingress-addon-legacy-316909 containerd[830]: time="2023-11-08T23:48:15.610141066Z" level=info msg="TearDown network for sandbox \"2eec13862a2ad1c4c7c81ee5a9b021fdcd4f7a4e2c613ec9fa11832154562bb2\" successfully"
	Nov 08 23:48:15 ingress-addon-legacy-316909 containerd[830]: time="2023-11-08T23:48:15.610191076Z" level=info msg="StopPodSandbox for \"2eec13862a2ad1c4c7c81ee5a9b021fdcd4f7a4e2c613ec9fa11832154562bb2\" returns successfully"
	
	* 
	* ==> coredns [3298cb0af2d5eb9510a2bbd3d17cf5e114b255e03b13da3a783fd3a977038a98] <==
	* [INFO] 10.244.0.5:55689 - 36426 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000276684s
	[INFO] 10.244.0.5:55689 - 20320 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001379983s
	[INFO] 10.244.0.5:52286 - 49241 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002397423s
	[INFO] 10.244.0.5:52286 - 37702 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.010143256s
	[INFO] 10.244.0.5:55689 - 42612 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.010234587s
	[INFO] 10.244.0.5:55689 - 57129 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000287851s
	[INFO] 10.244.0.5:52286 - 12196 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000042905s
	[INFO] 10.244.0.5:43946 - 59687 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000082363s
	[INFO] 10.244.0.5:49159 - 49322 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000311071s
	[INFO] 10.244.0.5:43946 - 1075 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000062678s
	[INFO] 10.244.0.5:49159 - 2807 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000049855s
	[INFO] 10.244.0.5:49159 - 13860 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000032959s
	[INFO] 10.244.0.5:49159 - 9560 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000033075s
	[INFO] 10.244.0.5:49159 - 4571 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00008155s
	[INFO] 10.244.0.5:49159 - 12404 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000047664s
	[INFO] 10.244.0.5:43946 - 44960 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000047631s
	[INFO] 10.244.0.5:49159 - 20776 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001377931s
	[INFO] 10.244.0.5:43946 - 31287 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000085447s
	[INFO] 10.244.0.5:43946 - 15210 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00004576s
	[INFO] 10.244.0.5:43946 - 38689 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000066436s
	[INFO] 10.244.0.5:49159 - 63040 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001227507s
	[INFO] 10.244.0.5:49159 - 20004 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00005033s
	[INFO] 10.244.0.5:43946 - 46745 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001045527s
	[INFO] 10.244.0.5:43946 - 1933 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000842926s
	[INFO] 10.244.0.5:43946 - 30106 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00005449s
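	
The NXDOMAIN/NOERROR trail above is ordinary resolv.conf search-path expansion, not a fault: with the default options ndots:5, the resolver appends each search suffix to hello-world-app.default.svc.cluster.local before finally trying the name as given, and only that last attempt returns NOERROR. A pod resolv.conf consistent with these queries would look roughly like the sketch below (the querying pod sits in the ingress-nginx namespace, the us-east-2.compute.internal suffix is inherited from the EC2 host, and 10.96.0.10 is the usual kube-dns ClusterIP; the actual file was not captured in this log):

	search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	nameserver 10.96.0.10
	options ndots:5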
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-316909
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-316909
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e21c718ea4d79be9ab6c82476dffc8ce4079c94e
	                    minikube.k8s.io/name=ingress-addon-legacy-316909
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_08T23_46_39_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Nov 2023 23:46:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-316909
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Nov 2023 23:48:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Nov 2023 23:48:12 +0000   Wed, 08 Nov 2023 23:46:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Nov 2023 23:48:12 +0000   Wed, 08 Nov 2023 23:46:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Nov 2023 23:48:12 +0000   Wed, 08 Nov 2023 23:46:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Nov 2023 23:48:12 +0000   Wed, 08 Nov 2023 23:46:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-316909
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 6c498b332a594ae4b257a97c573deeee
	  System UUID:                af2fc493-00b4-4d82-b9fa-73b4408ce51a
	  Boot ID:                    34e87349-8f26-419b-8ec9-ff846a1986b6
	  Kernel Version:             5.15.0-1049-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.24
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-7x9vq                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 coredns-66bff467f8-8z84m                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     88s
	  kube-system                 etcd-ingress-addon-legacy-316909                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kindnet-x2bp8                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      88s
	  kube-system                 kube-apiserver-ingress-addon-legacy-316909             250m (12%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-316909    200m (10%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-xwfqm                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-scheduler-ingress-addon-legacy-316909             100m (5%)     0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)    100m (5%)
	  memory             120Mi (1%)    220Mi (2%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-1Gi      0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	  hugepages-32Mi     0 (0%)        0 (0%)
	  hugepages-64Ki     0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 113s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  113s (x5 over 113s)  kubelet     Node ingress-addon-legacy-316909 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s (x5 over 113s)  kubelet     Node ingress-addon-legacy-316909 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s (x4 over 113s)  kubelet     Node ingress-addon-legacy-316909 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  113s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 99s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  99s                  kubelet     Node ingress-addon-legacy-316909 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s                  kubelet     Node ingress-addon-legacy-316909 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s                  kubelet     Node ingress-addon-legacy-316909 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  99s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                89s                  kubelet     Node ingress-addon-legacy-316909 status is now: NodeReady
	  Normal  Starting                 87s                  kube-proxy  Starting kube-proxy.
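	
The percentages in Allocated resources above follow directly from the pod requests against the node's capacity: CPU requests are 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) + 100m (kindnet) + 100m (coredns) = 750m, and 750m / 2000m of allocatable CPU ≈ 37%; the lone CPU limit is kindnet's 100m ≈ 5%. Memory requests are 70Mi (coredns) + 50Mi (kindnet) = 120Mi of 8022500Ki ≈ 1%, and limits 170Mi + 50Mi = 220Mi ≈ 2%.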
	
	* 
	* ==> dmesg <==
	* [  +0.001083] FS-Cache: O-key=[8] 'f53d5c0100000000'
	[  +0.000715] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000972] FS-Cache: N-cookie d=0000000032cbb99c{9p.inode} n=0000000030c5be65
	[  +0.001095] FS-Cache: N-key=[8] 'f53d5c0100000000'
	[  +0.002823] FS-Cache: Duplicate cookie detected
	[  +0.000740] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.000966] FS-Cache: O-cookie d=0000000032cbb99c{9p.inode} n=00000000e15b38e7
	[  +0.001101] FS-Cache: O-key=[8] 'f53d5c0100000000'
	[  +0.000724] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000963] FS-Cache: N-cookie d=0000000032cbb99c{9p.inode} n=00000000b6e061f9
	[  +0.001109] FS-Cache: N-key=[8] 'f53d5c0100000000'
	[  +2.697196] FS-Cache: Duplicate cookie detected
	[  +0.000754] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.000980] FS-Cache: O-cookie d=0000000032cbb99c{9p.inode} n=00000000813330c5
	[  +0.001161] FS-Cache: O-key=[8] 'f43d5c0100000000'
	[  +0.000723] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000956] FS-Cache: N-cookie d=0000000032cbb99c{9p.inode} n=0000000030c5be65
	[  +0.001061] FS-Cache: N-key=[8] 'f43d5c0100000000'
	[  +0.410647] FS-Cache: Duplicate cookie detected
	[  +0.000730] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.001014] FS-Cache: O-cookie d=0000000032cbb99c{9p.inode} n=000000001a1e0a53
	[  +0.001044] FS-Cache: O-key=[8] 'fa3d5c0100000000'
	[  +0.000718] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000980] FS-Cache: N-cookie d=0000000032cbb99c{9p.inode} n=00000000e29ca3d4
	[  +0.001083] FS-Cache: N-key=[8] 'fa3d5c0100000000'
	
	* 
	* ==> etcd [6e0b8f29593808c2b09a2a786002764be4ebbcccfe983e076bda7778604e8532] <==
	* raft2023/11/08 23:46:29 INFO: aec36adc501070cc became follower at term 1
	raft2023/11/08 23:46:29 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-11-08 23:46:29.821350 W | auth: simple token is not cryptographically signed
	2023-11-08 23:46:29.846038 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-11-08 23:46:29.849943 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/11/08 23:46:29 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-11-08 23:46:29.852085 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-11-08 23:46:29.853713 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-08 23:46:29.854043 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-11-08 23:46:29.854247 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/11/08 23:46:31 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/11/08 23:46:31 INFO: aec36adc501070cc became candidate at term 2
	raft2023/11/08 23:46:31 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/11/08 23:46:31 INFO: aec36adc501070cc became leader at term 2
	raft2023/11/08 23:46:31 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-11-08 23:46:31.010257 I | etcdserver: setting up the initial cluster version to 3.4
	2023-11-08 23:46:31.010580 I | etcdserver: published {Name:ingress-addon-legacy-316909 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-11-08 23:46:31.010933 I | embed: ready to serve client requests
	2023-11-08 23:46:31.022159 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-11-08 23:46:31.036030 I | etcdserver/api: enabled capabilities for version 3.4
	2023-11-08 23:46:31.049473 I | embed: ready to serve client requests
	2023-11-08 23:46:31.166505 I | embed: serving client requests on 192.168.49.2:2379
	2023-11-08 23:46:31.306376 I | embed: serving client requests on 127.0.0.1:2379
	2023-11-08 23:46:32.997812 W | etcdserver: read-only range request "key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true " with result "range_response_count:0 size:4" took too long (104.068843ms) to execute
	2023-11-08 23:46:33.033869 W | etcdserver: read-only range request "key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" limit:10000 " with result "range_response_count:0 size:4" took too long (223.473972ms) to execute
	
	* 
	* ==> kernel <==
	*  23:48:21 up  6:30,  0 users,  load average: 1.04, 1.70, 1.66
	Linux ingress-addon-legacy-316909 5.15.0-1049-aws #54~20.04.1-Ubuntu SMP Fri Oct 6 22:07:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [0c32f47061e325454c695af95b44fafc024c03b1e6c0eb58a26bbc3d07f08572] <==
	* I1108 23:46:56.422339       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1108 23:46:56.422423       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1108 23:46:56.422551       1 main.go:116] setting mtu 1500 for CNI 
	I1108 23:46:56.422568       1 main.go:146] kindnetd IP family: "ipv4"
	I1108 23:46:56.422584       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1108 23:46:56.819163       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1108 23:46:56.819208       1 main.go:227] handling current node
	I1108 23:47:06.925589       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1108 23:47:06.925616       1 main.go:227] handling current node
	I1108 23:47:16.936923       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1108 23:47:16.936952       1 main.go:227] handling current node
	I1108 23:47:26.948955       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1108 23:47:26.948985       1 main.go:227] handling current node
	I1108 23:47:36.952494       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1108 23:47:36.952525       1 main.go:227] handling current node
	I1108 23:47:46.955703       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1108 23:47:46.955734       1 main.go:227] handling current node
	I1108 23:47:56.959957       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1108 23:47:56.959988       1 main.go:227] handling current node
	I1108 23:48:06.963863       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1108 23:48:06.963891       1 main.go:227] handling current node
	I1108 23:48:16.976125       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1108 23:48:16.976155       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [c7b33eb2a87c14fada246585ba8097d23941320338bb1c24133dcea404de9395] <==
	* I1108 23:46:35.530170       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	E1108 23:46:35.614278       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1108 23:46:35.716979       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1108 23:46:35.717188       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1108 23:46:35.717288       1 cache.go:39] Caches are synced for autoregister controller
	I1108 23:46:35.726040       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1108 23:46:35.730831       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 23:46:36.514398       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1108 23:46:36.514481       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1108 23:46:36.522946       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1108 23:46:36.527466       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1108 23:46:36.527490       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1108 23:46:36.940902       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 23:46:37.056628       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1108 23:46:37.127055       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1108 23:46:37.128327       1 controller.go:609] quota admission added evaluator for: endpoints
	I1108 23:46:37.133698       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 23:46:37.951595       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1108 23:46:38.744093       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1108 23:46:38.864693       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1108 23:46:42.184203       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 23:46:53.633484       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1108 23:46:53.769246       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1108 23:47:16.416820       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1108 23:47:41.178613       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [833391434e1fa0eca3126c8bd822c6dfe809f45581104cc4222abd1de16ddbd6] <==
	* I1108 23:46:53.654307       1 shared_informer.go:230] Caches are synced for GC 
	I1108 23:46:53.654378       1 shared_informer.go:230] Caches are synced for job 
	I1108 23:46:53.654519       1 shared_informer.go:230] Caches are synced for PVC protection 
	I1108 23:46:53.750719       1 shared_informer.go:230] Caches are synced for attach detach 
	I1108 23:46:53.765375       1 shared_informer.go:230] Caches are synced for deployment 
	I1108 23:46:53.779956       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"acc3a9d4-1bc9-492f-98a4-50458cfd39e3", APIVersion:"apps/v1", ResourceVersion:"221", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2
	I1108 23:46:53.794774       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"39a65bca-9e86-436f-927d-8c6981f36e75", APIVersion:"apps/v1", ResourceVersion:"361", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-fg9sn
	I1108 23:46:53.823613       1 shared_informer.go:230] Caches are synced for disruption 
	I1108 23:46:53.823638       1 disruption.go:339] Sending events to api server.
	I1108 23:46:53.829428       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"39a65bca-9e86-436f-927d-8c6981f36e75", APIVersion:"apps/v1", ResourceVersion:"361", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-8z84m
	I1108 23:46:53.917846       1 shared_informer.go:230] Caches are synced for resource quota 
	I1108 23:46:53.917943       1 shared_informer.go:230] Caches are synced for resource quota 
	I1108 23:46:53.990098       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"acc3a9d4-1bc9-492f-98a4-50458cfd39e3", APIVersion:"apps/v1", ResourceVersion:"377", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1108 23:46:54.007593       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1108 23:46:54.007625       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1108 23:46:54.016724       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1108 23:46:54.071144       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"39a65bca-9e86-436f-927d-8c6981f36e75", APIVersion:"apps/v1", ResourceVersion:"378", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-fg9sn
	I1108 23:47:16.411170       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"f118caa4-43a5-4c8b-beae-6cf0431ad352", APIVersion:"apps/v1", ResourceVersion:"483", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1108 23:47:16.433111       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"db3af581-64d4-4903-b5bc-b185fc7de02b", APIVersion:"apps/v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-vm6kf
	I1108 23:47:16.435713       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"b68fe905-f7d0-4511-bc2b-20b1fa3b3435", APIVersion:"batch/v1", ResourceVersion:"486", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-rhsn9
	I1108 23:47:16.559819       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"45d6aab3-a780-40ed-b779-3634771ab916", APIVersion:"batch/v1", ResourceVersion:"500", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-7f62r
	I1108 23:47:19.533358       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"b68fe905-f7d0-4511-bc2b-20b1fa3b3435", APIVersion:"batch/v1", ResourceVersion:"496", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1108 23:47:19.561635       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"45d6aab3-a780-40ed-b779-3634771ab916", APIVersion:"batch/v1", ResourceVersion:"509", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1108 23:47:51.909046       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"444c7962-574b-4f51-a1cb-a52bea840562", APIVersion:"apps/v1", ResourceVersion:"622", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1108 23:47:51.921199       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"58951ef4-90f6-42ae-9aa4-96edc4ee2f01", APIVersion:"apps/v1", ResourceVersion:"623", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-7x9vq
	
	* 
	* ==> kube-proxy [a696f4e36dcfaa2da4c84b717cfc89e77947c178472af10e2ddf9156a281bfb7] <==
	* W1108 23:46:54.789001       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1108 23:46:54.815238       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1108 23:46:54.815483       1 server_others.go:186] Using iptables Proxier.
	I1108 23:46:54.821868       1 server.go:583] Version: v1.18.20
	I1108 23:46:54.822751       1 config.go:315] Starting service config controller
	I1108 23:46:54.822804       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1108 23:46:54.824853       1 config.go:133] Starting endpoints config controller
	I1108 23:46:54.824865       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1108 23:46:54.927966       1 shared_informer.go:230] Caches are synced for endpoints config 
	I1108 23:46:54.928061       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [a0a61502ad1eadffd1a51ad8e5086615a6e7c2ff6f8bb5f44e577f193d01f374] <==
	* W1108 23:46:35.669775       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 23:46:35.669869       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1108 23:46:35.669956       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1108 23:46:35.720791       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1108 23:46:35.720883       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1108 23:46:35.722717       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1108 23:46:35.722763       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1108 23:46:35.723152       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1108 23:46:35.723252       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1108 23:46:35.729996       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1108 23:46:35.730355       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1108 23:46:35.730517       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1108 23:46:35.730645       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1108 23:46:35.730771       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1108 23:46:35.731021       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1108 23:46:35.731428       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1108 23:46:35.731540       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1108 23:46:35.732127       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1108 23:46:35.732189       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1108 23:46:35.732323       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1108 23:46:35.732380       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1108 23:46:36.609333       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1108 23:46:36.679256       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1108 23:46:36.881979       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1108 23:46:38.823116       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* Nov 08 23:47:56 ingress-addon-legacy-316909 kubelet[1649]: E1108 23:47:56.674824    1649 pod_workers.go:191] Error syncing pod 154ca1ee-f88a-4d25-8661-22ddaf59b5bd ("hello-world-app-5f5d8b66bb-7x9vq_default(154ca1ee-f88a-4d25-8661-22ddaf59b5bd)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-7x9vq_default(154ca1ee-f88a-4d25-8661-22ddaf59b5bd)"
	Nov 08 23:47:57 ingress-addon-legacy-316909 kubelet[1649]: I1108 23:47:57.678176    1649 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: dc9f95a2dcccb6f37fe69ea1bc86c56dc0dd4321240330ee1c81613c11ec5b93
	Nov 08 23:47:57 ingress-addon-legacy-316909 kubelet[1649]: E1108 23:47:57.678444    1649 pod_workers.go:191] Error syncing pod 154ca1ee-f88a-4d25-8661-22ddaf59b5bd ("hello-world-app-5f5d8b66bb-7x9vq_default(154ca1ee-f88a-4d25-8661-22ddaf59b5bd)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-7x9vq_default(154ca1ee-f88a-4d25-8661-22ddaf59b5bd)"
	Nov 08 23:48:00 ingress-addon-legacy-316909 kubelet[1649]: I1108 23:48:00.392447    1649 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 88f0095961bdb8a9795b9ef9871d3f305222d6ace4cbb81c9f080177b6e2fd23
	Nov 08 23:48:00 ingress-addon-legacy-316909 kubelet[1649]: E1108 23:48:00.392808    1649 pod_workers.go:191] Error syncing pod e36788ef-b02b-46d5-a2c9-ea3a87668952 ("kube-ingress-dns-minikube_kube-system(e36788ef-b02b-46d5-a2c9-ea3a87668952)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(e36788ef-b02b-46d5-a2c9-ea3a87668952)"
	Nov 08 23:48:07 ingress-addon-legacy-316909 kubelet[1649]: I1108 23:48:07.828213    1649 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-qss8q" (UniqueName: "kubernetes.io/secret/e36788ef-b02b-46d5-a2c9-ea3a87668952-minikube-ingress-dns-token-qss8q") pod "e36788ef-b02b-46d5-a2c9-ea3a87668952" (UID: "e36788ef-b02b-46d5-a2c9-ea3a87668952")
	Nov 08 23:48:07 ingress-addon-legacy-316909 kubelet[1649]: I1108 23:48:07.834697    1649 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e36788ef-b02b-46d5-a2c9-ea3a87668952-minikube-ingress-dns-token-qss8q" (OuterVolumeSpecName: "minikube-ingress-dns-token-qss8q") pod "e36788ef-b02b-46d5-a2c9-ea3a87668952" (UID: "e36788ef-b02b-46d5-a2c9-ea3a87668952"). InnerVolumeSpecName "minikube-ingress-dns-token-qss8q". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 08 23:48:07 ingress-addon-legacy-316909 kubelet[1649]: I1108 23:48:07.928583    1649 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-qss8q" (UniqueName: "kubernetes.io/secret/e36788ef-b02b-46d5-a2c9-ea3a87668952-minikube-ingress-dns-token-qss8q") on node "ingress-addon-legacy-316909" DevicePath ""
	Nov 08 23:48:08 ingress-addon-legacy-316909 kubelet[1649]: I1108 23:48:08.699463    1649 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 88f0095961bdb8a9795b9ef9871d3f305222d6ace4cbb81c9f080177b6e2fd23
	Nov 08 23:48:09 ingress-addon-legacy-316909 kubelet[1649]: I1108 23:48:09.392315    1649 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: dc9f95a2dcccb6f37fe69ea1bc86c56dc0dd4321240330ee1c81613c11ec5b93
	Nov 08 23:48:09 ingress-addon-legacy-316909 kubelet[1649]: I1108 23:48:09.703141    1649 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: dc9f95a2dcccb6f37fe69ea1bc86c56dc0dd4321240330ee1c81613c11ec5b93
	Nov 08 23:48:09 ingress-addon-legacy-316909 kubelet[1649]: I1108 23:48:09.703475    1649 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 54c5680217c051ef3a16bd8ba07abde919b5b16feee874b82486e8e1b922a06e
	Nov 08 23:48:09 ingress-addon-legacy-316909 kubelet[1649]: E1108 23:48:09.703728    1649 pod_workers.go:191] Error syncing pod 154ca1ee-f88a-4d25-8661-22ddaf59b5bd ("hello-world-app-5f5d8b66bb-7x9vq_default(154ca1ee-f88a-4d25-8661-22ddaf59b5bd)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-7x9vq_default(154ca1ee-f88a-4d25-8661-22ddaf59b5bd)"
	Nov 08 23:48:13 ingress-addon-legacy-316909 kubelet[1649]: E1108 23:48:13.366548    1649 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-vm6kf.1795cab065284e2f", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-vm6kf", UID:"dd3d3da5-3c85-407c-8dee-7e752bc24f13", APIVersion:"v1", ResourceVersion:"491", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-316909"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc14b256f559a2c2f, ext:94677426962, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc14b256f559a2c2f, ext:94677426962, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-vm6kf.1795cab065284e2f" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Nov 08 23:48:13 ingress-addon-legacy-316909 kubelet[1649]: E1108 23:48:13.392965    1649 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-vm6kf.1795cab065284e2f", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-vm6kf", UID:"dd3d3da5-3c85-407c-8dee-7e752bc24f13", APIVersion:"v1", ResourceVersion:"491", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-316909"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc14b256f559a2c2f, ext:94677426962, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc14b256f56e4c304, ext:94699092455, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-vm6kf.1795cab065284e2f" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Nov 08 23:48:15 ingress-addon-legacy-316909 kubelet[1649]: W1108 23:48:15.721052    1649 pod_container_deletor.go:77] Container "2eec13862a2ad1c4c7c81ee5a9b021fdcd4f7a4e2c613ec9fa11832154562bb2" not found in pod's containers
	Nov 08 23:48:17 ingress-addon-legacy-316909 kubelet[1649]: I1108 23:48:17.454980    1649 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-csvvg" (UniqueName: "kubernetes.io/secret/dd3d3da5-3c85-407c-8dee-7e752bc24f13-ingress-nginx-token-csvvg") pod "dd3d3da5-3c85-407c-8dee-7e752bc24f13" (UID: "dd3d3da5-3c85-407c-8dee-7e752bc24f13")
	Nov 08 23:48:17 ingress-addon-legacy-316909 kubelet[1649]: I1108 23:48:17.458225    1649 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/dd3d3da5-3c85-407c-8dee-7e752bc24f13-webhook-cert") pod "dd3d3da5-3c85-407c-8dee-7e752bc24f13" (UID: "dd3d3da5-3c85-407c-8dee-7e752bc24f13")
	Nov 08 23:48:17 ingress-addon-legacy-316909 kubelet[1649]: I1108 23:48:17.461567    1649 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd3d3da5-3c85-407c-8dee-7e752bc24f13-ingress-nginx-token-csvvg" (OuterVolumeSpecName: "ingress-nginx-token-csvvg") pod "dd3d3da5-3c85-407c-8dee-7e752bc24f13" (UID: "dd3d3da5-3c85-407c-8dee-7e752bc24f13"). InnerVolumeSpecName "ingress-nginx-token-csvvg". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 08 23:48:17 ingress-addon-legacy-316909 kubelet[1649]: I1108 23:48:17.462284    1649 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd3d3da5-3c85-407c-8dee-7e752bc24f13-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "dd3d3da5-3c85-407c-8dee-7e752bc24f13" (UID: "dd3d3da5-3c85-407c-8dee-7e752bc24f13"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 08 23:48:17 ingress-addon-legacy-316909 kubelet[1649]: I1108 23:48:17.558888    1649 reconciler.go:319] Volume detached for volume "ingress-nginx-token-csvvg" (UniqueName: "kubernetes.io/secret/dd3d3da5-3c85-407c-8dee-7e752bc24f13-ingress-nginx-token-csvvg") on node "ingress-addon-legacy-316909" DevicePath ""
	Nov 08 23:48:17 ingress-addon-legacy-316909 kubelet[1649]: I1108 23:48:17.558942    1649 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/dd3d3da5-3c85-407c-8dee-7e752bc24f13-webhook-cert") on node "ingress-addon-legacy-316909" DevicePath ""
	Nov 08 23:48:18 ingress-addon-legacy-316909 kubelet[1649]: W1108 23:48:18.398912    1649 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/dd3d3da5-3c85-407c-8dee-7e752bc24f13/volumes" does not exist
	Nov 08 23:48:21 ingress-addon-legacy-316909 kubelet[1649]: I1108 23:48:21.392305    1649 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 54c5680217c051ef3a16bd8ba07abde919b5b16feee874b82486e8e1b922a06e
	Nov 08 23:48:21 ingress-addon-legacy-316909 kubelet[1649]: E1108 23:48:21.392589    1649 pod_workers.go:191] Error syncing pod 154ca1ee-f88a-4d25-8661-22ddaf59b5bd ("hello-world-app-5f5d8b66bb-7x9vq_default(154ca1ee-f88a-4d25-8661-22ddaf59b5bd)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-7x9vq_default(154ca1ee-f88a-4d25-8661-22ddaf59b5bd)"
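	
The kubelet log above shows hello-world-app-5f5d8b66bb-7x9vq cycling through CrashLoopBackOff with increasing back-off (10s, then 20s). A minimal triage of such a pod, sketched against this profile (these commands were not part of the captured run):

	kubectl --context ingress-addon-legacy-316909 describe pod hello-world-app-5f5d8b66bb-7x9vq
	kubectl --context ingress-addon-legacy-316909 logs hello-world-app-5f5d8b66bb-7x9vq --previous

describe surfaces the container's last state and exit code, while logs --previous reads output from the crashed container instance rather than the freshly restarted one.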
	
	* 
	* ==> storage-provisioner [e69bad241d6a85b78da2ecb18e45fd59b60de93fa926f0a8df953c0a79986bdd] <==
	* I1108 23:46:58.021048       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 23:46:58.034357       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 23:46:58.034448       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1108 23:46:58.042547       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 23:46:58.042974       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"045d017f-e611-4d82-9efe-272d9f355605", APIVersion:"v1", ResourceVersion:"424", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-316909_7af82e8b-af6a-4329-b88d-9941b7f631dd became leader
	I1108 23:46:58.043032       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-316909_7af82e8b-af6a-4329-b88d-9941b7f631dd!
	I1108 23:46:58.143733       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-316909_7af82e8b-af6a-4329-b88d-9941b7f631dd!
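	
The storage provisioner coordinates through client-go leader election on the kube-system/k8s.io-minikube-hostpath Endpoints object, which is why the LeaderElection event above references an Endpoints resource rather than a Lease. The current holder can be read back from that object's annotations (typically control-plane.alpha.kubernetes.io/leader); a sketch, not part of the captured run:

	kubectl --context ingress-addon-legacy-316909 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml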
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-316909 -n ingress-addon-legacy-316909
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-316909 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (57.18s)
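When this validation fails, the resolution path recorded in the coredns section can be reproduced in isolation with a throwaway pod; a sketch against this profile (the image tag is an assumption, not taken from the run):

	kubectl --context ingress-addon-legacy-316909 run dns-probe --rm -it --restart=Never \
	  --image=busybox:1.36 -- nslookup hello-world-app.default.svc.cluster.local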

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (137.49s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-881977 --alsologtostderr -v=3
E1109 00:22:25.286719  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-arm64 stop -p no-preload-881977 --alsologtostderr -v=3: exit status 82 (2m15.375887584s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-881977"  ...
	* Powering off "no-preload-881977" via SSH ...
	* Stopping node "no-preload-881977"  ...
	* Powering off "no-preload-881977" via SSH ...
	* Stopping node "no-preload-881977"  ...
	* Powering off "no-preload-881977" via SSH ...
	* Stopping node "no-preload-881977"  ...
	* Powering off "no-preload-881977" via SSH ...
	* Stopping node "no-preload-881977"  ...
	* Powering off "no-preload-881977" via SSH ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 00:20:27.655939  918798 out.go:296] Setting OutFile to fd 1 ...
	I1109 00:20:27.656129  918798 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 00:20:27.656156  918798 out.go:309] Setting ErrFile to fd 2...
	I1109 00:20:27.656176  918798 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 00:20:27.656470  918798 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17586-749551/.minikube/bin
	I1109 00:20:27.656766  918798 out.go:303] Setting JSON to false
	I1109 00:20:27.656923  918798 mustload.go:65] Loading cluster: no-preload-881977
	I1109 00:20:27.657377  918798 config.go:182] Loaded profile config "no-preload-881977": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1109 00:20:27.658101  918798 profile.go:148] Saving config to /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/no-preload-881977/config.json ...
	I1109 00:20:27.658353  918798 mustload.go:65] Loading cluster: no-preload-881977
	I1109 00:20:27.658526  918798 config.go:182] Loaded profile config "no-preload-881977": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1109 00:20:27.658587  918798 stop.go:39] StopHost: no-preload-881977
	I1109 00:20:27.661701  918798 out.go:177] * Stopping node "no-preload-881977"  ...
	I1109 00:20:27.663381  918798 cli_runner.go:164] Run: docker container inspect no-preload-881977 --format={{.State.Status}}
	I1109 00:20:27.687250  918798 out.go:177] * Powering off "no-preload-881977" via SSH ...
	I1109 00:20:27.689535  918798 cli_runner.go:164] Run: docker exec --privileged -t no-preload-881977 /bin/bash -c "sudo init 0"
	I1109 00:20:28.844960  918798 cli_runner.go:164] Run: docker container inspect no-preload-881977 --format={{.State.Status}}
	I1109 00:20:28.866078  918798 oci.go:664] temporary error: container no-preload-881977 status is Running but expect it to be exited
	I1109 00:20:28.866136  918798 oci.go:670] Successfully shutdown container no-preload-881977
	I1109 00:20:28.866143  918798 stop.go:88] shutdown container: err=<nil>
	I1109 00:20:28.866194  918798 main.go:141] libmachine: Stopping "no-preload-881977"...
	I1109 00:20:28.866284  918798 cli_runner.go:164] Run: docker container inspect no-preload-881977 --format={{.State.Status}}
	I1109 00:20:28.884820  918798 kic_runner.go:93] Run: systemctl --version
	I1109 00:20:28.884841  918798 kic_runner.go:114] Args: [docker exec --privileged no-preload-881977 systemctl --version]
	I1109 00:20:28.962024  918798 kic_runner.go:93] Run: sudo systemctl stop kubelet
	I1109 00:20:28.962045  918798 kic_runner.go:114] Args: [docker exec --privileged no-preload-881977 sudo systemctl stop kubelet]
	W1109 00:20:29.063216  918798 kic.go:453] couldn't stop kubelet. will continue with stop anyways: sudo systemctl stop kubelet: exit status 1
	stdout:
	
	stderr:
	sudo: unable to resolve host no-preload-881977: Temporary failure in name resolution
	Failed to connect to bus: No such file or directory
	I1109 00:20:29.063318  918798 kic_runner.go:93] Run: sudo systemctl stop -f kubelet
	I1109 00:20:29.063333  918798 kic_runner.go:114] Args: [docker exec --privileged no-preload-881977 sudo systemctl stop -f kubelet]
	W1109 00:20:29.143051  918798 kic.go:455] couldn't force stop kubelet. will continue with stop anyways: sudo systemctl stop -f kubelet: exit status 1
	stdout:
	
	stderr:
	sudo: unable to resolve host no-preload-881977: Temporary failure in name resolution
	Failed to connect to bus: No such file or directory
	I1109 00:20:29.143104  918798 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I1109 00:20:29.143206  918798 kic_runner.go:93] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 00:20:29.143221  918798 kic_runner.go:114] Args: [docker exec --privileged no-preload-881977 sudo -s eval crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator]
	I1109 00:20:29.312909  918798 kic.go:466] unable list containers : crictl list: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator": exit status 1
	stdout:
	
	stderr:
	sudo: unable to resolve host no-preload-881977: Temporary failure in name resolution
	time="2023-11-09T00:20:29Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2023-11-09T00:20:29Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2023-11-09T00:20:29Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2023-11-09T00:20:29Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I1109 00:20:29.312954  918798 kic.go:476] successfully stopped kubernetes!
	I1109 00:20:29.313026  918798 kic_runner.go:93] Run: pgrep kube-apiserver
	I1109 00:20:29.313040  918798 kic_runner.go:114] Args: [docker exec --privileged no-preload-881977 pgrep kube-apiserver]
	I1109 00:20:53.443829  918798 stop.go:59] stop err: stopping no-preload-881977: exit status 1
	W1109 00:20:53.443870  918798 stop.go:163] stop host returned error: Temporary Error: stop: stopping no-preload-881977: exit status 1
	I1109 00:20:53.443904  918798 retry.go:31] will retry after 1.205455673s: Temporary Error: stop: stopping no-preload-881977: exit status 1
	I1109 00:20:54.650216  918798 stop.go:39] StopHost: no-preload-881977
	I1109 00:20:54.652507  918798 out.go:177] * Stopping node "no-preload-881977"  ...
	I1109 00:20:54.654455  918798 cli_runner.go:164] Run: docker container inspect no-preload-881977 --format={{.State.Status}}
	I1109 00:20:54.684330  918798 out.go:177] * Powering off "no-preload-881977" via SSH ...
	I1109 00:20:54.686415  918798 cli_runner.go:164] Run: docker exec --privileged -t no-preload-881977 /bin/bash -c "sudo init 0"
	W1109 00:20:54.745250  918798 cli_runner.go:211] docker exec --privileged -t no-preload-881977 /bin/bash -c "sudo init 0" returned with exit code 126
	I1109 00:20:54.745298  918798 oci.go:650] error shutdown no-preload-881977: docker exec --privileged -t no-preload-881977 /bin/bash -c "sudo init 0": exit status 126
	stdout:
	OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown
	
	stderr:
	I1109 00:20:55.746173  918798 cli_runner.go:164] Run: docker container inspect no-preload-881977 --format={{.State.Status}}
	I1109 00:20:55.766584  918798 oci.go:664] temporary error: container no-preload-881977 status is Running but expect it to be exited
	I1109 00:20:55.766615  918798 oci.go:670] Successfully shutdown container no-preload-881977
	I1109 00:20:55.766623  918798 stop.go:88] shutdown container: err=<nil>
	I1109 00:20:55.766642  918798 main.go:141] libmachine: Stopping "no-preload-881977"...
	I1109 00:20:55.766724  918798 cli_runner.go:164] Run: docker container inspect no-preload-881977 --format={{.State.Status}}
	I1109 00:20:55.787297  918798 kic_runner.go:93] Run: sudo systemctl stop kubelet
	I1109 00:20:55.787321  918798 kic_runner.go:114] Args: [docker exec --privileged no-preload-881977 sudo systemctl stop kubelet]
	W1109 00:20:55.848618  918798 kic.go:453] couldn't stop kubelet. will continue with stop anyways: sudo systemctl stop kubelet: exit status 126
	stdout:
	OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown
	
	stderr:
	I1109 00:20:55.848709  918798 kic_runner.go:93] Run: sudo systemctl stop -f kubelet
	I1109 00:20:55.848725  918798 kic_runner.go:114] Args: [docker exec --privileged no-preload-881977 sudo systemctl stop -f kubelet]
	W1109 00:20:55.907034  918798 kic.go:455] couldn't force stop kubelet. will continue with stop anyways: sudo systemctl stop -f kubelet: exit status 126
	stdout:
	OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown
	
	stderr:
	I1109 00:20:55.907073  918798 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I1109 00:20:55.907165  918798 kic_runner.go:93] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 00:20:55.907179  918798 kic_runner.go:114] Args: [docker exec --privileged no-preload-881977 sudo -s eval crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator]
	I1109 00:20:55.962460  918798 kic.go:466] unable list containers : crictl list: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator": exit status 126
	stdout:
	OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown
	
	stderr:
	I1109 00:20:55.962486  918798 kic.go:476] successfully stopped kubernetes!
	I1109 00:20:55.962562  918798 kic_runner.go:93] Run: pgrep kube-apiserver
	I1109 00:20:55.962579  918798 kic_runner.go:114] Args: [docker exec --privileged no-preload-881977 pgrep kube-apiserver]
	I1109 00:21:20.094814  918798 stop.go:59] stop err: stopping no-preload-881977: exit status 1
	W1109 00:21:20.094853  918798 stop.go:163] stop host returned error: Temporary Error: stop: stopping no-preload-881977: exit status 1
	I1109 00:21:20.094872  918798 retry.go:31] will retry after 1.429009643s: Temporary Error: stop: stopping no-preload-881977: exit status 1
	I1109 00:21:21.524033  918798 stop.go:39] StopHost: no-preload-881977
	I1109 00:21:21.526408  918798 out.go:177] * Stopping node "no-preload-881977"  ...
	I1109 00:21:21.528201  918798 cli_runner.go:164] Run: docker container inspect no-preload-881977 --format={{.State.Status}}
	I1109 00:21:21.549006  918798 out.go:177] * Powering off "no-preload-881977" via SSH ...
	I1109 00:21:21.550614  918798 cli_runner.go:164] Run: docker exec --privileged -t no-preload-881977 /bin/bash -c "sudo init 0"
	W1109 00:21:21.598250  918798 cli_runner.go:211] docker exec --privileged -t no-preload-881977 /bin/bash -c "sudo init 0" returned with exit code 126
	I1109 00:21:21.598287  918798 oci.go:650] error shutdown no-preload-881977: docker exec --privileged -t no-preload-881977 /bin/bash -c "sudo init 0": exit status 126
	stdout:
	OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown
	
	stderr:
	I1109 00:21:22.599012  918798 cli_runner.go:164] Run: docker container inspect no-preload-881977 --format={{.State.Status}}
	I1109 00:21:22.617608  918798 oci.go:664] temporary error: container no-preload-881977 status is Running but expect it to be exited
	I1109 00:21:22.617644  918798 oci.go:670] Successfully shutdown container no-preload-881977
	I1109 00:21:22.617651  918798 stop.go:88] shutdown container: err=<nil>
	I1109 00:21:22.617670  918798 main.go:141] libmachine: Stopping "no-preload-881977"...
	I1109 00:21:22.617750  918798 cli_runner.go:164] Run: docker container inspect no-preload-881977 --format={{.State.Status}}
	I1109 00:21:22.637386  918798 kic_runner.go:93] Run: sudo systemctl stop kubelet
	I1109 00:21:22.637410  918798 kic_runner.go:114] Args: [docker exec --privileged no-preload-881977 sudo systemctl stop kubelet]
	W1109 00:21:22.681753  918798 kic.go:453] couldn't stop kubelet. will continue with stop anyways: sudo systemctl stop kubelet: exit status 126
	stdout:
	OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown
	
	stderr:
	I1109 00:21:22.681858  918798 kic_runner.go:93] Run: sudo systemctl stop -f kubelet
	I1109 00:21:22.681876  918798 kic_runner.go:114] Args: [docker exec --privileged no-preload-881977 sudo systemctl stop -f kubelet]
	W1109 00:21:22.739254  918798 kic.go:455] couldn't force stop kubelet. will continue with stop anyways: sudo systemctl stop -f kubelet: exit status 126
	stdout:
	OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown
	
	stderr:
	I1109 00:21:22.739280  918798 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I1109 00:21:22.739390  918798 kic_runner.go:93] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 00:21:22.739398  918798 kic_runner.go:114] Args: [docker exec --privileged no-preload-881977 sudo -s eval crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator]
	I1109 00:21:22.798270  918798 kic.go:466] unable list containers : crictl list: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator": exit status 126
	stdout:
	OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown
	
	stderr:
	I1109 00:21:22.798293  918798 kic.go:476] successfully stopped kubernetes!
	I1109 00:21:22.798370  918798 kic_runner.go:93] Run: pgrep kube-apiserver
	I1109 00:21:22.798379  918798 kic_runner.go:114] Args: [docker exec --privileged no-preload-881977 pgrep kube-apiserver]
	I1109 00:21:46.917681  918798 stop.go:59] stop err: stopping no-preload-881977: exit status 1
	W1109 00:21:46.917719  918798 stop.go:163] stop host returned error: Temporary Error: stop: stopping no-preload-881977: exit status 1
	I1109 00:21:46.917739  918798 retry.go:31] will retry after 1.846325574s: Temporary Error: stop: stopping no-preload-881977: exit status 1
	I1109 00:21:48.764489  918798 stop.go:39] StopHost: no-preload-881977
	I1109 00:21:48.766749  918798 out.go:177] * Stopping node "no-preload-881977"  ...
	I1109 00:21:48.768958  918798 cli_runner.go:164] Run: docker container inspect no-preload-881977 --format={{.State.Status}}
	I1109 00:21:48.790884  918798 out.go:177] * Powering off "no-preload-881977" via SSH ...
	I1109 00:21:48.792916  918798 cli_runner.go:164] Run: docker exec --privileged -t no-preload-881977 /bin/bash -c "sudo init 0"
	W1109 00:21:48.842554  918798 cli_runner.go:211] docker exec --privileged -t no-preload-881977 /bin/bash -c "sudo init 0" returned with exit code 126
	I1109 00:21:48.842590  918798 oci.go:650] error shutdown no-preload-881977: docker exec --privileged -t no-preload-881977 /bin/bash -c "sudo init 0": exit status 126
	stdout:
	OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown
	
	stderr:
	I1109 00:21:49.842761  918798 cli_runner.go:164] Run: docker container inspect no-preload-881977 --format={{.State.Status}}
	I1109 00:21:49.864600  918798 oci.go:664] temporary error: container no-preload-881977 status is Running but expect it to be exited
	I1109 00:21:49.864631  918798 oci.go:670] Successfully shutdown container no-preload-881977
	I1109 00:21:49.864638  918798 stop.go:88] shutdown container: err=<nil>
	I1109 00:21:49.864656  918798 main.go:141] libmachine: Stopping "no-preload-881977"...
	I1109 00:21:49.864733  918798 cli_runner.go:164] Run: docker container inspect no-preload-881977 --format={{.State.Status}}
	I1109 00:21:49.884650  918798 kic_runner.go:93] Run: sudo systemctl stop kubelet
	I1109 00:21:49.884675  918798 kic_runner.go:114] Args: [docker exec --privileged no-preload-881977 sudo systemctl stop kubelet]
	W1109 00:21:49.944472  918798 kic.go:453] couldn't stop kubelet. will continue with stop anyways: sudo systemctl stop kubelet: exit status 126
	stdout:
	OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown
	
	stderr:
	I1109 00:21:49.944571  918798 kic_runner.go:93] Run: sudo systemctl stop -f kubelet
	I1109 00:21:49.944585  918798 kic_runner.go:114] Args: [docker exec --privileged no-preload-881977 sudo systemctl stop -f kubelet]
	W1109 00:21:50.002256  918798 kic.go:455] couldn't force stop kubelet. will continue with stop anyways: sudo systemctl stop -f kubelet: exit status 126
	stdout:
	OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown
	
	stderr:
	I1109 00:21:50.002304  918798 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I1109 00:21:50.002401  918798 kic_runner.go:93] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 00:21:50.002417  918798 kic_runner.go:114] Args: [docker exec --privileged no-preload-881977 sudo -s eval crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator]
	I1109 00:21:50.076220  918798 kic.go:466] unable list containers : crictl list: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator": exit status 126
	stdout:
	OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown
	
	stderr:
	I1109 00:21:50.076245  918798 kic.go:476] successfully stopped kubernetes!
	I1109 00:21:50.076325  918798 kic_runner.go:93] Run: pgrep kube-apiserver
	I1109 00:21:50.076333  918798 kic_runner.go:114] Args: [docker exec --privileged no-preload-881977 pgrep kube-apiserver]
	I1109 00:22:14.235868  918798 stop.go:59] stop err: stopping no-preload-881977: exit status 1
	W1109 00:22:14.235904  918798 stop.go:163] stop host returned error: Temporary Error: stop: stopping no-preload-881977: exit status 1
	I1109 00:22:14.235923  918798 retry.go:31] will retry after 3.034914653s: Temporary Error: stop: stopping no-preload-881977: exit status 1
	I1109 00:22:17.271002  918798 stop.go:39] StopHost: no-preload-881977
	I1109 00:22:17.273214  918798 out.go:177] * Stopping node "no-preload-881977"  ...
	I1109 00:22:17.275402  918798 cli_runner.go:164] Run: docker container inspect no-preload-881977 --format={{.State.Status}}
	I1109 00:22:17.300786  918798 out.go:177] * Powering off "no-preload-881977" via SSH ...
	I1109 00:22:17.303158  918798 cli_runner.go:164] Run: docker exec --privileged -t no-preload-881977 /bin/bash -c "sudo init 0"
	W1109 00:22:17.361826  918798 cli_runner.go:211] docker exec --privileged -t no-preload-881977 /bin/bash -c "sudo init 0" returned with exit code 126
	I1109 00:22:17.361872  918798 oci.go:650] error shutdown no-preload-881977: docker exec --privileged -t no-preload-881977 /bin/bash -c "sudo init 0": exit status 126
	stdout:
	OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown
	
	stderr:
	I1109 00:22:18.363031  918798 cli_runner.go:164] Run: docker container inspect no-preload-881977 --format={{.State.Status}}
	I1109 00:22:18.382112  918798 oci.go:664] temporary error: container no-preload-881977 status is Running but expect it to be exited
	I1109 00:22:18.382143  918798 oci.go:670] Successfully shutdown container no-preload-881977
	I1109 00:22:18.382150  918798 stop.go:88] shutdown container: err=<nil>
	I1109 00:22:18.382169  918798 main.go:141] libmachine: Stopping "no-preload-881977"...
	I1109 00:22:18.382246  918798 cli_runner.go:164] Run: docker container inspect no-preload-881977 --format={{.State.Status}}
	I1109 00:22:18.401564  918798 kic_runner.go:93] Run: sudo systemctl stop kubelet
	I1109 00:22:18.401587  918798 kic_runner.go:114] Args: [docker exec --privileged no-preload-881977 sudo systemctl stop kubelet]
	W1109 00:22:18.455117  918798 kic.go:453] couldn't stop kubelet. will continue with stop anyways: sudo systemctl stop kubelet: exit status 126
	stdout:
	OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown
	
	stderr:
	I1109 00:22:18.455213  918798 kic_runner.go:93] Run: sudo systemctl stop -f kubelet
	I1109 00:22:18.455223  918798 kic_runner.go:114] Args: [docker exec --privileged no-preload-881977 sudo systemctl stop -f kubelet]
	W1109 00:22:18.510329  918798 kic.go:455] couldn't force stop kubelet. will continue with stop anyways: sudo systemctl stop -f kubelet: exit status 126
	stdout:
	OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown
	
	stderr:
	I1109 00:22:18.510360  918798 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I1109 00:22:18.510443  918798 kic_runner.go:93] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 00:22:18.510456  918798 kic_runner.go:114] Args: [docker exec --privileged no-preload-881977 sudo -s eval crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator]
	I1109 00:22:18.568944  918798 kic.go:466] unable list containers : crictl list: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator": exit status 126
	stdout:
	OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown
	
	stderr:
	I1109 00:22:18.568968  918798 kic.go:476] successfully stopped kubernetes!
	I1109 00:22:18.569054  918798 kic_runner.go:93] Run: pgrep kube-apiserver
	I1109 00:22:18.569063  918798 kic_runner.go:114] Args: [docker exec --privileged no-preload-881977 pgrep kube-apiserver]
	I1109 00:22:42.677078  918798 stop.go:59] stop err: stopping no-preload-881977: exit status 1
	W1109 00:22:42.677111  918798 stop.go:163] stop host returned error: Temporary Error: stop: stopping no-preload-881977: exit status 1
	I1109 00:22:42.679107  918798 out.go:177] 
	W1109 00:22:42.680914  918798 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: stopping no-preload-881977: exit status 1
	W1109 00:22:42.680963  918798 out.go:239] * 
	W1109 00:22:42.946683  918798 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 00:22:42.948689  918798 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-arm64 stop -p no-preload-881977 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-881977
helpers_test.go:235: (dbg) docker inspect no-preload-881977:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3fdef2a329f94737f9419f87e0a1c0422b86da5c72d889e0ed998b72710560da",
	        "Created": "2023-11-09T00:18:52.867772019Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 913559,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-09T00:18:53.523413944Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:62753ecb37c4e3c5bf7b6c8d02fe88b543f553e92492fca245cded98b0d364dd",
	        "ResolvConfPath": "/var/lib/docker/containers/3fdef2a329f94737f9419f87e0a1c0422b86da5c72d889e0ed998b72710560da/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3fdef2a329f94737f9419f87e0a1c0422b86da5c72d889e0ed998b72710560da/hostname",
	        "HostsPath": "/var/lib/docker/containers/3fdef2a329f94737f9419f87e0a1c0422b86da5c72d889e0ed998b72710560da/hosts",
	        "LogPath": "/var/lib/docker/containers/3fdef2a329f94737f9419f87e0a1c0422b86da5c72d889e0ed998b72710560da/3fdef2a329f94737f9419f87e0a1c0422b86da5c72d889e0ed998b72710560da-json.log",
	        "Name": "/no-preload-881977",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-881977:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-881977",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4534e95c6369d1c98fbd83a4b39e36ec53256e36eff9dcd467cdf9f8d48bd7b6-init/diff:/var/lib/docker/overlay2/a37793fd41a65d2d53e46d1ba8e85f7ca52242d993ce6ed8de0d2d0e3cddac68/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4534e95c6369d1c98fbd83a4b39e36ec53256e36eff9dcd467cdf9f8d48bd7b6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4534e95c6369d1c98fbd83a4b39e36ec53256e36eff9dcd467cdf9f8d48bd7b6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4534e95c6369d1c98fbd83a4b39e36ec53256e36eff9dcd467cdf9f8d48bd7b6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-881977",
	                "Source": "/var/lib/docker/volumes/no-preload-881977/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-881977",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-881977",
	                "name.minikube.sigs.k8s.io": "no-preload-881977",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c3350f4a343f5fbbf1a16ea28cd8d9da2fc351f6b6c3d0a1efb567f98ee875ac",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33939"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33938"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33935"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33937"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33936"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c3350f4a343f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-881977": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3fdef2a329f9",
	                        "no-preload-881977"
	                    ],
	                    "NetworkID": "e7538c09064c9e298d1f44de0c17bc2360049aac006e98bc362815afc93902a4",
	                    "EndpointID": "e2d65eb4eb638daf141194c616c3f8b2dd3573204396989aab5fd29ba8bf9a80",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-881977 -n no-preload-881977
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-881977 -n no-preload-881977: exit status 3 (2.088880254s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1109 00:22:45.050057  919976 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:35902->127.0.0.1:33939: read: connection reset by peer
	E1109 00:22:45.050074  919976 status.go:249] status error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:35902->127.0.0.1:33939: read: connection reset by peer

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-881977" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (137.49s)
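Note on the failure pattern above: every `docker exec` into no-preload-881977 returns exit code 126 with "error executing setns process", yet `docker inspect` keeps reporting the container as Running, so the stop loop retries until GUEST_STOP_TIMEOUT. That combination suggests dockerd's view of the container and the actual state of its init process have diverged. The Go sketch below is not part of the test suite; only the container name is taken from the log, everything else is illustrative.

	// probe.go: compare dockerd's view of the container with the host's view
	// of its init process (PID 1 inside, .State.Pid on the host).
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		name := "no-preload-881977" // container name from the log above

		// dockerd's view: status and the host PID of the container's init.
		out, err := exec.Command("docker", "inspect",
			"-f", "{{.State.Status}} {{.State.Pid}}", name).Output()
		if err != nil {
			fmt.Fprintf(os.Stderr, "docker inspect failed: %v\n", err)
			os.Exit(1)
		}
		fields := strings.Fields(string(out))
		if len(fields) != 2 {
			fmt.Fprintf(os.Stderr, "unexpected inspect output: %q\n", out)
			os.Exit(1)
		}
		status, pid := fields[0], fields[1]
		fmt.Printf("dockerd view: status=%s pid=%s\n", status, pid)

		// Host's view: if init is gone, /proc/<pid> disappears even while
		// dockerd still says "running".
		if _, err := os.Stat("/proc/" + pid); err != nil {
			fmt.Printf("host view: /proc/%s missing (%v)\n", pid, err)
		} else {
			fmt.Printf("host view: /proc/%s present\n", pid)
		}
	}

If /proc/<pid> is missing while the status still reads "running", the setns failures and the unanswered `sudo init 0` above would be consistent: there is no live namespace for exec to enter and no init left to handle the shutdown.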

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (9.69s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-881977 -n no-preload-881977
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-881977 -n no-preload-881977: exit status 3 (1.980370918s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1109 00:22:47.042387  920002 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: EOF
	E1109 00:22:47.042407  920002 status.go:249] status error: NewSession: new client: new client: ssh: handshake failed: EOF

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-881977 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1109 00:22:49.190572  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/functional-471648/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p no-preload-881977 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (4.668140323s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:36078->127.0.0.1:33939: read: connection reset by peer
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable dashboard -p no-preload-881977 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-881977
helpers_test.go:235: (dbg) docker inspect no-preload-881977:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3fdef2a329f94737f9419f87e0a1c0422b86da5c72d889e0ed998b72710560da",
	        "Created": "2023-11-09T00:18:52.867772019Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 913559,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-09T00:18:53.523413944Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:62753ecb37c4e3c5bf7b6c8d02fe88b543f553e92492fca245cded98b0d364dd",
	        "ResolvConfPath": "/var/lib/docker/containers/3fdef2a329f94737f9419f87e0a1c0422b86da5c72d889e0ed998b72710560da/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3fdef2a329f94737f9419f87e0a1c0422b86da5c72d889e0ed998b72710560da/hostname",
	        "HostsPath": "/var/lib/docker/containers/3fdef2a329f94737f9419f87e0a1c0422b86da5c72d889e0ed998b72710560da/hosts",
	        "LogPath": "/var/lib/docker/containers/3fdef2a329f94737f9419f87e0a1c0422b86da5c72d889e0ed998b72710560da/3fdef2a329f94737f9419f87e0a1c0422b86da5c72d889e0ed998b72710560da-json.log",
	        "Name": "/no-preload-881977",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-881977:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-881977",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4534e95c6369d1c98fbd83a4b39e36ec53256e36eff9dcd467cdf9f8d48bd7b6-init/diff:/var/lib/docker/overlay2/a37793fd41a65d2d53e46d1ba8e85f7ca52242d993ce6ed8de0d2d0e3cddac68/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4534e95c6369d1c98fbd83a4b39e36ec53256e36eff9dcd467cdf9f8d48bd7b6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4534e95c6369d1c98fbd83a4b39e36ec53256e36eff9dcd467cdf9f8d48bd7b6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4534e95c6369d1c98fbd83a4b39e36ec53256e36eff9dcd467cdf9f8d48bd7b6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-881977",
	                "Source": "/var/lib/docker/volumes/no-preload-881977/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-881977",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-881977",
	                "name.minikube.sigs.k8s.io": "no-preload-881977",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c3350f4a343f5fbbf1a16ea28cd8d9da2fc351f6b6c3d0a1efb567f98ee875ac",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33939"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33938"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33935"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33937"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33936"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c3350f4a343f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-881977": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3fdef2a329f9",
	                        "no-preload-881977"
	                    ],
	                    "NetworkID": "e7538c09064c9e298d1f44de0c17bc2360049aac006e98bc362815afc93902a4",
	                    "EndpointID": "e2d65eb4eb638daf141194c616c3f8b2dd3573204396989aab5fd29ba8bf9a80",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-881977 -n no-preload-881977
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-881977 -n no-preload-881977: exit status 3 (3.018668576s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1109 00:22:54.753160  920096 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: EOF
	E1109 00:22:54.753175  920096 status.go:249] status error: NewSession: new client: new client: ssh: handshake failed: EOF

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-881977" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (9.69s)
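Note on the status errors above: both probes reach the forwarded SSH port (127.0.0.1:33939 per the docker inspect output) but die during the handshake, first with "connection reset by peer" and then "EOF", i.e. something accepts the TCP connection and immediately drops it. A minimal sketch, assuming that host:port, to separate "no listener" from "listener up but sshd not serving" (illustrative only, not part of the suite):

	// sshprobe.go: dial the forwarded port and wait for the SSH version banner.
	package main

	import (
		"bufio"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Forwarded 22/tcp for no-preload-881977 per the inspect output above.
		addr := "127.0.0.1:33939"

		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			fmt.Printf("dial failed: %v (no listener on the forwarded port)\n", err)
			return
		}
		defer conn.Close()

		// A healthy sshd sends its "SSH-2.0-..." version banner right after
		// the TCP handshake; a reset or EOF here matches the
		// "ssh: handshake failed" errors in the status probes.
		_ = conn.SetReadDeadline(time.Now().Add(3 * time.Second))
		banner, err := bufio.NewReader(conn).ReadString('\n')
		if err != nil {
			fmt.Printf("connected, but no SSH banner: %v\n", err)
			return
		}
		fmt.Printf("got SSH banner: %q\n", banner)
	}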

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (974.75s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-881977 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3
E1109 00:23:00.824261  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
E1109 00:24:46.145494  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/functional-471648/client.crt: no such file or directory
E1109 00:25:28.338273  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt: no such file or directory
E1109 00:27:25.286301  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt: no such file or directory
E1109 00:28:00.825057  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
E1109 00:29:46.145212  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/functional-471648/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-881977 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3: exit status 80 (16m10.902120798s)

                                                
                                                
-- stdout --
	* [no-preload-881977] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17586
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17586-749551/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-749551/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node no-preload-881977 in cluster no-preload-881977
	* Pulling base image ...
	* Updating the running docker "no-preload-881977" container ...
	* Updating the running docker "no-preload-881977" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 00:22:54.817587  920123 out.go:296] Setting OutFile to fd 1 ...
	I1109 00:22:54.817794  920123 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 00:22:54.817821  920123 out.go:309] Setting ErrFile to fd 2...
	I1109 00:22:54.817840  920123 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 00:22:54.818181  920123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17586-749551/.minikube/bin
	I1109 00:22:54.818597  920123 out.go:303] Setting JSON to false
	I1109 00:22:54.819889  920123 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":25524,"bootTime":1699463851,"procs":257,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1109 00:22:54.819993  920123 start.go:138] virtualization:  
	I1109 00:22:54.822490  920123 out.go:177] * [no-preload-881977] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1109 00:22:54.824525  920123 out.go:177]   - MINIKUBE_LOCATION=17586
	I1109 00:22:54.826170  920123 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 00:22:54.824685  920123 notify.go:220] Checking for updates...
	I1109 00:22:54.828055  920123 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17586-749551/kubeconfig
	I1109 00:22:54.829722  920123 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-749551/.minikube
	I1109 00:22:54.831388  920123 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 00:22:54.833019  920123 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 00:22:54.835096  920123 config.go:182] Loaded profile config "no-preload-881977": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1109 00:22:54.835757  920123 driver.go:378] Setting default libvirt URI to qemu:///system
	I1109 00:22:54.875628  920123 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1109 00:22:54.875736  920123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 00:22:54.961754  920123 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:63 SystemTime:2023-11-09 00:22:54.951471832 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1109 00:22:54.961854  920123 docker.go:295] overlay module found
	I1109 00:22:54.965735  920123 out.go:177] * Using the docker driver based on existing profile
	I1109 00:22:54.967479  920123 start.go:298] selected driver: docker
	I1109 00:22:54.967497  920123 start.go:902] validating driver "docker" against &{Name:no-preload-881977 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-881977 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1109 00:22:54.967609  920123 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 00:22:54.968267  920123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 00:22:55.068087  920123 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:63 SystemTime:2023-11-09 00:22:55.056732637 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1109 00:22:55.068487  920123 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 00:22:55.068520  920123 cni.go:84] Creating CNI manager for ""
	I1109 00:22:55.068529  920123 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1109 00:22:55.068544  920123 start_flags.go:323] config:
	{Name:no-preload-881977 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-881977 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1109 00:22:55.070821  920123 out.go:177] * Starting control plane node no-preload-881977 in cluster no-preload-881977
	I1109 00:22:55.072473  920123 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1109 00:22:55.074534  920123 out.go:177] * Pulling base image ...
	I1109 00:22:55.076176  920123 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1109 00:22:55.076345  920123 profile.go:148] Saving config to /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/no-preload-881977/config.json ...
	I1109 00:22:55.076719  920123 cache.go:107] acquiring lock: {Name:mk497c7c1332cb491622525c96c4b9342bc94291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 00:22:55.076814  920123 cache.go:115] /home/jenkins/minikube-integration/17586-749551/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1109 00:22:55.076831  920123 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17586-749551/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 118.202µs
	I1109 00:22:55.076841  920123 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17586-749551/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1109 00:22:55.076855  920123 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1109 00:22:55.076974  920123 cache.go:107] acquiring lock: {Name:mkd6dafb810abace8f8e0cc5725999979a60f981 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 00:22:55.077028  920123 cache.go:115] /home/jenkins/minikube-integration/17586-749551/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.3 exists
	I1109 00:22:55.077040  920123 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.3" -> "/home/jenkins/minikube-integration/17586-749551/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.3" took 73.698µs
	I1109 00:22:55.077050  920123 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.3 -> /home/jenkins/minikube-integration/17586-749551/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.3 succeeded
	I1109 00:22:55.077061  920123 cache.go:107] acquiring lock: {Name:mk51e3602436afb853972b81bcaffbe0417e5c16 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 00:22:55.077099  920123 cache.go:115] /home/jenkins/minikube-integration/17586-749551/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.3 exists
	I1109 00:22:55.077110  920123 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.3" -> "/home/jenkins/minikube-integration/17586-749551/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.3" took 50.618µs
	I1109 00:22:55.077122  920123 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.3 -> /home/jenkins/minikube-integration/17586-749551/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.3 succeeded
	I1109 00:22:55.077136  920123 cache.go:107] acquiring lock: {Name:mke11bf9adc1ec829efd8c6b9a3f2c1469362fba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 00:22:55.077166  920123 cache.go:115] /home/jenkins/minikube-integration/17586-749551/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.3 exists
	I1109 00:22:55.077176  920123 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.3" -> "/home/jenkins/minikube-integration/17586-749551/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.3" took 42.018µs
	I1109 00:22:55.077183  920123 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.3 -> /home/jenkins/minikube-integration/17586-749551/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.3 succeeded
	I1109 00:22:55.077194  920123 cache.go:107] acquiring lock: {Name:mk295d0ba417a5c688401c021a219d7fd24ffa4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 00:22:55.077225  920123 cache.go:115] /home/jenkins/minikube-integration/17586-749551/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.3 exists
	I1109 00:22:55.077230  920123 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.3" -> "/home/jenkins/minikube-integration/17586-749551/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.3" took 36.283µs
	I1109 00:22:55.077243  920123 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.3 -> /home/jenkins/minikube-integration/17586-749551/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.3 succeeded
	I1109 00:22:55.077253  920123 cache.go:107] acquiring lock: {Name:mk1c6af950d69c7c2dc824f6fccd64c316849b87 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 00:22:55.077289  920123 cache.go:115] /home/jenkins/minikube-integration/17586-749551/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 exists
	I1109 00:22:55.077296  920123 cache.go:96] cache image "registry.k8s.io/etcd:3.5.9-0" -> "/home/jenkins/minikube-integration/17586-749551/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0" took 43.61µs
	I1109 00:22:55.077302  920123 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.9-0 -> /home/jenkins/minikube-integration/17586-749551/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 succeeded
	I1109 00:22:55.077317  920123 cache.go:107] acquiring lock: {Name:mkdd8a642888879e76bac07c72d55bd8fbc4dd61 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 00:22:55.077348  920123 cache.go:115] /home/jenkins/minikube-integration/17586-749551/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I1109 00:22:55.077359  920123 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/17586-749551/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 43.955µs
	I1109 00:22:55.077366  920123 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/17586-749551/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I1109 00:22:55.077377  920123 cache.go:107] acquiring lock: {Name:mkf0dbb3d5f51e59ca870829e55fdd9ba1c86023 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 00:22:55.077415  920123 cache.go:115] /home/jenkins/minikube-integration/17586-749551/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I1109 00:22:55.077424  920123 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/home/jenkins/minikube-integration/17586-749551/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 48.131µs
	I1109 00:22:55.077519  920123 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /home/jenkins/minikube-integration/17586-749551/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I1109 00:22:55.077534  920123 cache.go:87] Successfully saved all images to host disk.
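
The cache.go lines above all trace the same per-image check: acquire a per-image lock, stat the expected tarball under .minikube/cache/images/arm64, and skip the save when it already exists. A minimal Go sketch of that exists-then-skip pattern, with the locking omitted (illustrative only, not minikube's actual code; cachePath and the image list here are made up for the example):

	package main
	
	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)
	
	// cachePath is a hypothetical helper mapping an image ref such as
	// "registry.k8s.io/kube-proxy:v1.28.3" to a tarball path like
	// ".../registry.k8s.io/kube-proxy_v1.28.3" under the cache root.
	func cachePath(root, image string) string {
		return filepath.Join(root, strings.ReplaceAll(image, ":", "_"))
	}
	
	func main() {
		root := os.ExpandEnv("$HOME/.minikube/cache/images/arm64")
		images := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/kube-proxy:v1.28.3",
			"registry.k8s.io/etcd:3.5.9-0",
		}
		for _, img := range images {
			p := cachePath(root, img)
			// The "exists -> skip" branch matches the cache.go:115/cache.go:80 pairs above.
			if _, err := os.Stat(p); err == nil {
				fmt.Printf("cache image %q exists at %q, skipping download\n", img, p)
				continue
			}
			fmt.Printf("cache image %q missing, would download to %q\n", img, p)
		}
	}
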
	I1109 00:22:55.098723  920123 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon, skipping pull
	I1109 00:22:55.098752  920123 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in daemon, skipping load
	I1109 00:22:55.098774  920123 cache.go:194] Successfully downloaded all kic artifacts
	I1109 00:22:55.098803  920123 start.go:365] acquiring machines lock for no-preload-881977: {Name:mk3b964979021e50618b8ac49e6dc994101d0e99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 00:22:55.098871  920123 start.go:369] acquired machines lock for "no-preload-881977" in 46.745µs
	I1109 00:22:55.098914  920123 start.go:96] Skipping create...Using existing machine configuration
	I1109 00:22:55.098923  920123 fix.go:54] fixHost starting: 
	I1109 00:22:55.099282  920123 cli_runner.go:164] Run: docker container inspect no-preload-881977 --format={{.State.Status}}
	I1109 00:22:55.123458  920123 fix.go:102] recreateIfNeeded on no-preload-881977: state=Running err=<nil>
	W1109 00:22:55.123491  920123 fix.go:128] unexpected machine state, will restart: <nil>
	I1109 00:22:55.125687  920123 out.go:177] * Updating the running docker "no-preload-881977" container ...
	I1109 00:22:55.127425  920123 machine.go:88] provisioning docker machine ...
	I1109 00:22:55.127471  920123 ubuntu.go:169] provisioning hostname "no-preload-881977"
	I1109 00:22:55.127554  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:22:55.147911  920123 main.go:141] libmachine: Using SSH client type: native
	I1109 00:22:55.148380  920123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 33939 <nil> <nil>}
	I1109 00:22:55.148403  920123 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-881977 && echo "no-preload-881977" | sudo tee /etc/hostname
	I1109 00:22:55.148825  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37232->127.0.0.1:33939: read: connection reset by peer
	[... 59 further dial attempts, one roughly every 3 seconds from 00:22:58 through 00:25:52, each failing with "ssh: handshake failed: read tcp 127.0.0.1:<ephemeral port>->127.0.0.1:33939: read: connection reset by peer" or "ssh: handshake failed: EOF"; repeated lines condensed ...]
	I1109 00:25:55.224511  920123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
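
Port 33939 in the dial errors above is the host side of the container's published 22/tcp binding, recovered with the docker container inspect template shown in the log. A hedged, self-contained way to run the same query yourself (a sketch assuming a local docker CLI and a running no-preload-881977 container):

	package main
	
	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Same Go template the log runs: index into the container's published
		// 22/tcp bindings and print the host-side port (33939 in this run).
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect",
			"-f", format, "no-preload-881977").Output()
		if err != nil {
			log.Fatalf("docker inspect failed: %v", err)
		}
		fmt.Println("sshd published on 127.0.0.1:" + strings.TrimSpace(string(out)))
	}
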
	I1109 00:25:55.224641  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:25:55.248708  920123 main.go:141] libmachine: Using SSH client type: native
	I1109 00:25:55.249131  920123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 33939 <nil> <nil>}
	I1109 00:25:55.249155  920123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-881977' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-881977/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-881977' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 00:25:55.249636  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	[... 59 further dial attempts, one roughly every 3 seconds from 00:25:58 through 00:28:52, each failing with "ssh: handshake failed: read tcp 127.0.0.1:<ephemeral port>->127.0.0.1:33939: read: connection reset by peer" or "ssh: handshake failed: EOF"; repeated lines condensed ...]
	I1109 00:28:55.325724  920123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1109 00:28:55.325754  920123 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17586-749551/.minikube CaCertPath:/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17586-749551/.minikube}
	I1109 00:28:55.325780  920123 ubuntu.go:177] setting up certificates
	I1109 00:28:55.325793  920123 provision.go:83] configureAuth start
	I1109 00:28:55.325865  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:28:55.344582  920123 provision.go:138] copyHostCerts
	I1109 00:28:55.344662  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:28:55.344695  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:28:55.344778  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:28:55.344886  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:28:55.344895  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:28:55.344922  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:28:55.344980  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:28:55.344989  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:28:55.345012  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:28:55.345062  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
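
The provision.go:112 line above regenerates the machine's server.pem with the SAN list shown (the node IP, loopback, localhost, minikube, and the hostname). A rough sketch of building a certificate carrying those same SANs with Go's crypto/x509; note it is self-signed here for brevity, whereas minikube signs with its CA key (illustrative only, not minikube's code):

	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-881977"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			// Same SANs as the san=[...] list in the log line above.
			IPAddresses: []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
			DNSNames:    []string{"localhost", "minikube", "no-preload-881977"},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		// Self-signed: template doubles as parent.
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
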
	I1109 00:28:55.877310  920123 provision.go:172] copyRemoteCerts
	I1109 00:28:55.877383  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:28:55.877429  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:28:55.896160  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:28:55.897092  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:28:55.897115  920123 retry.go:31] will retry after 264.417682ms: ssh: handshake failed: EOF
	W1109 00:28:56.163133  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:59692->127.0.0.1:33939: read: connection reset by peer
	I1109 00:28:56.163162  920123 retry.go:31] will retry after 539.723549ms: ssh: handshake failed: read tcp 127.0.0.1:59692->127.0.0.1:33939: read: connection reset by peer
	W1109 00:28:56.704478  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:59704->127.0.0.1:33939: read: connection reset by peer
	I1109 00:28:56.704512  920123 retry.go:31] will retry after 811.706596ms: ssh: handshake failed: read tcp 127.0.0.1:59704->127.0.0.1:33939: read: connection reset by peer
	W1109 00:28:57.516830  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:59714->127.0.0.1:33939: read: connection reset by peer
	I1109 00:28:57.516918  920123 retry.go:31] will retry after 350.466325ms: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:59714->127.0.0.1:33939: read: connection reset by peer
	I1109 00:28:57.868588  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:28:57.894000  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:28:57.894883  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:59720->127.0.0.1:33939: read: connection reset by peer
	I1109 00:28:57.894907  920123 retry.go:31] will retry after 250.501985ms: ssh: handshake failed: read tcp 127.0.0.1:59720->127.0.0.1:33939: read: connection reset by peer
	W1109 00:28:58.146762  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:28:58.146792  920123 retry.go:31] will retry after 406.696728ms: ssh: handshake failed: EOF
	W1109 00:28:58.554543  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:59740->127.0.0.1:33939: read: connection reset by peer
	I1109 00:28:58.554574  920123 retry.go:31] will retry after 702.656638ms: ssh: handshake failed: read tcp 127.0.0.1:59740->127.0.0.1:33939: read: connection reset by peer
	W1109 00:28:59.257946  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:59750->127.0.0.1:33939: read: connection reset by peer
	I1109 00:28:59.258031  920123 provision.go:86] duration metric: configureAuth took 3.932232507s
	W1109 00:28:59.258045  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:59750->127.0.0.1:33939: read: connection reset by peer
	I1109 00:28:59.258058  920123 retry.go:31] will retry after 149.173µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:59750->127.0.0.1:33939: read: connection reset by peer
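
Each failed handshake above is followed by a retry.go line that sleeps for a short, jittered interval before dialing again, until a deadline is hit. A minimal sketch of that retry-with-jittered-backoff shape (illustrative; not minikube's retry package, and dial here is a stand-in that always fails):

	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	// dial stands in for the SSH handshake that keeps failing above.
	func dial() error { return errors.New("ssh: handshake failed: EOF") }
	
	func main() {
		deadline := time.Now().Add(10 * time.Second)
		for attempt := 1; time.Now().Before(deadline); attempt++ {
			if err := dial(); err == nil {
				fmt.Println("connected")
				return
			} else {
				// Jittered wait, like the "will retry after 264.417682ms" lines.
				wait := time.Duration(rand.Intn(500)+100) * time.Millisecond
				fmt.Printf("attempt %d failed (%v); will retry after %s\n", attempt, err, wait)
				time.Sleep(wait)
			}
		}
		fmt.Println("gave up: deadline exceeded")
	}
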
	I1109 00:28:59.259172  920123 provision.go:83] configureAuth start
	I1109 00:28:59.259275  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:28:59.277366  920123 provision.go:138] copyHostCerts
	I1109 00:28:59.277495  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:28:59.277513  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:28:59.277586  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:28:59.277692  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:28:59.277703  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:28:59.277725  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:28:59.277783  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:28:59.277797  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:28:59.277818  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:28:59.277862  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:28:59.526299  920123 provision.go:172] copyRemoteCerts
	I1109 00:28:59.526371  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:28:59.526418  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:28:59.548199  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:28:59.549169  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:59754->127.0.0.1:33939: read: connection reset by peer
	I1109 00:28:59.549194  920123 retry.go:31] will retry after 156.019838ms: ssh: handshake failed: read tcp 127.0.0.1:59754->127.0.0.1:33939: read: connection reset by peer
	W1109 00:28:59.705942  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:59768->127.0.0.1:33939: read: connection reset by peer
	I1109 00:28:59.705973  920123 retry.go:31] will retry after 416.707325ms: ssh: handshake failed: read tcp 127.0.0.1:59768->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:00.124320  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:59784->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:00.124355  920123 retry.go:31] will retry after 814.520191ms: ssh: handshake failed: read tcp 127.0.0.1:59784->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:00.939686  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:59788->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:00.939790  920123 retry.go:31] will retry after 173.883101ms: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:59788->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:01.114306  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:29:01.136231  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:29:01.137365  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:59796->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:01.137392  920123 retry.go:31] will retry after 175.207996ms: ssh: handshake failed: read tcp 127.0.0.1:59796->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:01.313242  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:01.313271  920123 retry.go:31] will retry after 235.669632ms: ssh: handshake failed: EOF
	W1109 00:29:01.550208  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:01.550240  920123 retry.go:31] will retry after 604.45301ms: ssh: handshake failed: EOF
	W1109 00:29:02.155370  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:02.155408  920123 retry.go:31] will retry after 562.809802ms: ssh: handshake failed: EOF
	W1109 00:29:02.719617  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:02.719696  920123 provision.go:86] duration metric: configureAuth took 3.460504412s
	W1109 00:29:02.719708  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:29:02.719718  920123 retry.go:31] will retry after 207.231µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:29:02.720843  920123 provision.go:83] configureAuth start
	I1109 00:29:02.720939  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:29:02.740002  920123 provision.go:138] copyHostCerts
	I1109 00:29:02.740078  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:29:02.740092  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:29:02.740158  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:29:02.740258  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:29:02.740268  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:29:02.740293  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:29:02.740351  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:29:02.740360  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:29:02.740380  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:29:02.740445  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:29:03.098833  920123 provision.go:172] copyRemoteCerts
	I1109 00:29:03.098910  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:29:03.098956  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:29:03.122436  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:29:03.123343  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:03.123368  920123 retry.go:31] will retry after 267.005826ms: ssh: handshake failed: EOF
	W1109 00:29:03.391303  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50582->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:03.391333  920123 retry.go:31] will retry after 190.961816ms: ssh: handshake failed: read tcp 127.0.0.1:50582->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:03.583212  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50592->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:03.583241  920123 retry.go:31] will retry after 637.573432ms: ssh: handshake failed: read tcp 127.0.0.1:50592->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:04.221616  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50604->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:04.221684  920123 retry.go:31] will retry after 147.706404ms: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:50604->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:04.370073  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:29:04.388824  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:29:04.389692  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50618->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:04.389715  920123 retry.go:31] will retry after 260.072621ms: ssh: handshake failed: read tcp 127.0.0.1:50618->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:04.650423  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50628->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:04.650450  920123 retry.go:31] will retry after 473.583093ms: ssh: handshake failed: read tcp 127.0.0.1:50628->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:05.124672  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50638->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:05.124701  920123 retry.go:31] will retry after 471.216581ms: ssh: handshake failed: read tcp 127.0.0.1:50638->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:05.596775  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50640->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:05.596804  920123 retry.go:31] will retry after 449.363311ms: ssh: handshake failed: read tcp 127.0.0.1:50640->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:06.046861  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:06.046942  920123 provision.go:86] duration metric: configureAuth took 3.326077911s
	W1109 00:29:06.046967  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:29:06.046983  920123 retry.go:31] will retry after 218.776µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:29:06.048144  920123 provision.go:83] configureAuth start
	I1109 00:29:06.048262  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:29:06.068043  920123 provision.go:138] copyHostCerts
	I1109 00:29:06.068119  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:29:06.068133  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:29:06.068197  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:29:06.068318  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:29:06.068336  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:29:06.068368  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:29:06.068496  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:29:06.068507  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:29:06.068533  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:29:06.068597  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:29:06.270318  920123 provision.go:172] copyRemoteCerts
	I1109 00:29:06.270391  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:29:06.270436  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:29:06.289698  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:29:06.290603  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50654->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:06.290627  920123 retry.go:31] will retry after 254.076ms: ssh: handshake failed: read tcp 127.0.0.1:50654->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:06.545571  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50666->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:06.545602  920123 retry.go:31] will retry after 520.695228ms: ssh: handshake failed: read tcp 127.0.0.1:50666->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:07.067838  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50674->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:07.067870  920123 retry.go:31] will retry after 735.383512ms: ssh: handshake failed: read tcp 127.0.0.1:50674->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:07.804301  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50678->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:07.804393  920123 retry.go:31] will retry after 199.175677ms: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:50678->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:08.003846  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:29:08.022936  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:29:08.023858  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:08.023886  920123 retry.go:31] will retry after 157.722006ms: ssh: handshake failed: EOF
	W1109 00:29:08.182694  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50698->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:08.182724  920123 retry.go:31] will retry after 468.227483ms: ssh: handshake failed: read tcp 127.0.0.1:50698->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:08.651639  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50700->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:08.651669  920123 retry.go:31] will retry after 501.78575ms: ssh: handshake failed: read tcp 127.0.0.1:50700->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:09.154997  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50704->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:09.155077  920123 provision.go:86] duration metric: configureAuth took 3.106890253s
	W1109 00:29:09.155089  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:50704->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:09.155099  920123 retry.go:31] will retry after 353.264µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:50704->127.0.0.1:33939: read: connection reset by peer
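
Each configureAuth cycle above begins by resolving the container's published SSH port with a Go template passed to `docker container inspect -f`. A minimal, standalone sketch of that lookup (not minikube's code; `hostSSHPort` is a hypothetical helper, and the container name is taken from this run):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostSSHPort resolves the host port Docker published for the
	// container's 22/tcp, using the same Go template the log shows.
	func hostSSHPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostSSHPort("no-preload-881977")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh port:", port) // 33939 in this run
	}
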
	I1109 00:29:09.156200  920123 provision.go:83] configureAuth start
	I1109 00:29:09.156302  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:29:09.179636  920123 provision.go:138] copyHostCerts
	I1109 00:29:09.179707  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:29:09.179722  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:29:09.179794  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:29:09.179900  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:29:09.179909  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:29:09.179936  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:29:09.179996  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:29:09.180007  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:29:09.180028  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:29:09.180084  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:29:09.340830  920123 provision.go:172] copyRemoteCerts
	I1109 00:29:09.340901  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:29:09.340958  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:29:09.359603  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:29:09.360469  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:09.360488  920123 retry.go:31] will retry after 319.303917ms: ssh: handshake failed: EOF
	W1109 00:29:09.680516  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50722->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:09.680546  920123 retry.go:31] will retry after 320.72252ms: ssh: handshake failed: read tcp 127.0.0.1:50722->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:10.004154  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50734->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:10.004188  920123 retry.go:31] will retry after 528.260484ms: ssh: handshake failed: read tcp 127.0.0.1:50734->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:10.533251  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50748->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:10.533321  920123 retry.go:31] will retry after 372.270821ms: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:50748->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:10.905874  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:29:10.925971  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:29:10.926853  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:10.926896  920123 retry.go:31] will retry after 351.989731ms: ssh: handshake failed: EOF
	W1109 00:29:11.279856  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:11.279893  920123 retry.go:31] will retry after 493.396837ms: ssh: handshake failed: EOF
	W1109 00:29:11.773890  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50786->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:11.773926  920123 retry.go:31] will retry after 517.940005ms: ssh: handshake failed: read tcp 127.0.0.1:50786->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:12.293189  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:55280->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:12.293272  920123 provision.go:86] duration metric: configureAuth took 3.137050844s
	W1109 00:29:12.293283  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:55280->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:12.293304  920123 retry.go:31] will retry after 576.419µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:55280->127.0.0.1:33939: read: connection reset by peer
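
The "new ssh client" lines correspond to dialing that forwarded port with public-key auth; a handshake cut short by the guest surfaces as the "connection reset by peer" and "EOF" failures logged above. A minimal sketch assuming golang.org/x/crypto/ssh (`newSSHClient` is a hypothetical stand-in; minikube's sshutil wraps the equivalent dial in its retry loop):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func newSSHClient(ip, port, user, keyPath string) (*ssh.Client, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return nil, err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return nil, err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; never in production
		}
		// If sshd in the guest drops the connection mid-handshake, this
		// returns the "ssh: handshake failed: ..." errors seen in the log.
		return ssh.Dial("tcp", fmt.Sprintf("%s:%s", ip, port), cfg)
	}

	func main() {
		c, err := newSSHClient("127.0.0.1", "33939", "docker",
			"/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa")
		if err != nil {
			fmt.Println("dial failed:", err)
			return
		}
		defer c.Close()
		fmt.Println("connected")
	}
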
	I1109 00:29:12.294412  920123 provision.go:83] configureAuth start
	I1109 00:29:12.294508  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:29:12.314569  920123 provision.go:138] copyHostCerts
	I1109 00:29:12.314640  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:29:12.314653  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:29:12.314718  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:29:12.314819  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:29:12.314833  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:29:12.314858  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:29:12.314915  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:29:12.314924  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:29:12.314944  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:29:12.314996  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:29:13.319431  920123 provision.go:172] copyRemoteCerts
	I1109 00:29:13.319506  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:29:13.319561  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:29:13.338105  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:29:13.339022  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:55286->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:13.339053  920123 retry.go:31] will retry after 129.436706ms: ssh: handshake failed: read tcp 127.0.0.1:55286->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:13.469850  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:55296->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:13.469882  920123 retry.go:31] will retry after 310.934933ms: ssh: handshake failed: read tcp 127.0.0.1:55296->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:13.781704  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:13.781750  920123 retry.go:31] will retry after 679.866236ms: ssh: handshake failed: EOF
	W1109 00:29:14.463110  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:14.463194  920123 retry.go:31] will retry after 200.133841ms: new client: new client: ssh: handshake failed: EOF
	I1109 00:29:14.663611  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:29:14.685471  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:29:14.686471  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:14.686502  920123 retry.go:31] will retry after 172.883191ms: ssh: handshake failed: EOF
	W1109 00:29:14.860653  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:14.860683  920123 retry.go:31] will retry after 509.750599ms: ssh: handshake failed: EOF
	W1109 00:29:15.371956  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:55338->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:15.371982  920123 retry.go:31] will retry after 361.302132ms: ssh: handshake failed: read tcp 127.0.0.1:55338->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:15.733912  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:15.734010  920123 provision.go:86] duration metric: configureAuth took 3.439579995s
	W1109 00:29:15.734022  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:29:15.734033  920123 retry.go:31] will retry after 1.099294ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: EOF
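
Between failed dials, retry.go:31 backs off for a short interval before the next attempt; the waits above range from a few hundred microseconds at the outer configureAuth level to under a second per SSH dial. A minimal sketch of that dial-and-retry shape (not the actual retry.go implementation; `dialWithRetry` and the jitter range are assumptions):

	package main

	import (
		"fmt"
		"math/rand"
		"net"
		"time"
	)

	// dialWithRetry keeps redialing with a short, jittered pause between
	// attempts, the same shape as the "will retry after <d>" lines above.
	func dialWithRetry(addr string, attempts int) (net.Conn, error) {
		var lastErr error
		for i := 0; i < attempts; i++ {
			c, err := net.DialTimeout("tcp", addr, 5*time.Second)
			if err == nil {
				return c, nil
			}
			lastErr = err
			// Jittered wait roughly matching the log's 100ms-900ms spread.
			d := time.Duration(100+rand.Intn(800)) * time.Millisecond
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return nil, fmt.Errorf("exhausted retries: %v", lastErr)
	}

	func main() {
		if _, err := dialWithRetry("127.0.0.1:33939", 5); err != nil {
			fmt.Println(err)
		}
	}
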
	I1109 00:29:15.736201  920123 provision.go:83] configureAuth start
	I1109 00:29:15.736293  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:29:15.755253  920123 provision.go:138] copyHostCerts
	I1109 00:29:15.755326  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:29:15.755339  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:29:15.755411  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:29:15.755516  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:29:15.755525  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:29:15.755547  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:29:15.755612  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:29:15.755622  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:29:15.755644  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:29:15.755696  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:29:17.467740  920123 provision.go:172] copyRemoteCerts
	I1109 00:29:17.467816  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:29:17.467865  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:29:17.487476  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:29:17.488377  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:55352->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:17.488404  920123 retry.go:31] will retry after 195.674171ms: ssh: handshake failed: read tcp 127.0.0.1:55352->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:17.685064  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:17.685104  920123 retry.go:31] will retry after 459.820456ms: ssh: handshake failed: EOF
	W1109 00:29:18.146410  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:55364->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:18.146440  920123 retry.go:31] will retry after 423.819676ms: ssh: handshake failed: read tcp 127.0.0.1:55364->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:18.571495  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:55380->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:18.571569  920123 retry.go:31] will retry after 214.560616ms: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:55380->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:18.787004  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:29:18.824010  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:29:18.824968  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:18.824987  920123 retry.go:31] will retry after 159.841027ms: ssh: handshake failed: EOF
	W1109 00:29:18.985912  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:55390->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:18.985943  920123 retry.go:31] will retry after 396.588051ms: ssh: handshake failed: read tcp 127.0.0.1:55390->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:19.384124  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:55398->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:19.384156  920123 retry.go:31] will retry after 704.019283ms: ssh: handshake failed: read tcp 127.0.0.1:55398->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:20.089476  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:20.089506  920123 retry.go:31] will retry after 537.538744ms: ssh: handshake failed: EOF
	W1109 00:29:20.628758  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:55402->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:20.628837  920123 provision.go:86] duration metric: configureAuth took 4.892620641s
	W1109 00:29:20.628852  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:55402->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:20.628868  920123 retry.go:31] will retry after 1.51377ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:55402->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:20.631049  920123 provision.go:83] configureAuth start
	I1109 00:29:20.631171  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:29:20.650443  920123 provision.go:138] copyHostCerts
	I1109 00:29:20.650510  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:29:20.650524  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:29:20.650589  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:29:20.650679  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:29:20.650689  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:29:20.650713  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:29:20.650769  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:29:20.650778  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:29:20.650798  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:29:20.650846  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:29:21.475157  920123 provision.go:172] copyRemoteCerts
	I1109 00:29:21.475225  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:29:21.475270  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:29:21.494131  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:29:21.495149  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:55408->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:21.495175  920123 retry.go:31] will retry after 331.425318ms: ssh: handshake failed: read tcp 127.0.0.1:55408->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:21.828323  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:55410->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:21.828353  920123 retry.go:31] will retry after 460.103172ms: ssh: handshake failed: read tcp 127.0.0.1:55410->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:22.289146  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:43710->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:22.289193  920123 retry.go:31] will retry after 556.870429ms: ssh: handshake failed: read tcp 127.0.0.1:43710->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:22.847170  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:43724->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:22.847242  920123 retry.go:31] will retry after 169.266308ms: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:43724->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:23.017683  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:29:23.037208  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:29:23.038146  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:23.038170  920123 retry.go:31] will retry after 334.985624ms: ssh: handshake failed: EOF
	W1109 00:29:23.374238  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:23.374265  920123 retry.go:31] will retry after 190.109212ms: ssh: handshake failed: EOF
	W1109 00:29:23.565024  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:43760->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:23.565054  920123 retry.go:31] will retry after 507.229051ms: ssh: handshake failed: read tcp 127.0.0.1:43760->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:24.073165  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:24.073196  920123 retry.go:31] will retry after 493.924502ms: ssh: handshake failed: EOF
	W1109 00:29:24.568405  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:43790->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:24.568486  920123 provision.go:86] duration metric: configureAuth took 3.937407145s
	W1109 00:29:24.568499  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:43790->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:24.568512  920123 retry.go:31] will retry after 1.910824ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:43790->127.0.0.1:33939: read: connection reset by peer
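
provision.go:112 regenerates server.pem on every attempt, signing it with the profile's CA and embedding the SAN list printed in the log. A sketch of that issuance step using crypto/x509, under the assumption that the CA certificate and key are already loaded (`serverCert` is a hypothetical helper; PEM encoding and key persistence are omitted, and the org/SAN values mirror this run):

	// Package provision sketches the server-certificate issuance step.
	package provision

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	func serverCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-881977"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// san=[192.168.76.2 127.0.0.1 localhost minikube no-preload-881977]
			IPAddresses: []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
			DNSNames:    []string{"localhost", "minikube", "no-preload-881977"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		return der, key, err
	}
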
	I1109 00:29:24.570691  920123 provision.go:83] configureAuth start
	I1109 00:29:24.570785  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:29:24.588785  920123 provision.go:138] copyHostCerts
	I1109 00:29:24.588862  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:29:24.588878  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:29:24.588942  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:29:24.589053  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:29:24.589064  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:29:24.589089  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:29:24.589145  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:29:24.589153  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:29:24.589172  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:29:24.589221  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:29:25.309995  920123 provision.go:172] copyRemoteCerts
	I1109 00:29:25.310069  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:29:25.310111  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:29:25.329566  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:29:25.330429  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:43798->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:25.330451  920123 retry.go:31] will retry after 154.491447ms: ssh: handshake failed: read tcp 127.0.0.1:43798->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:25.486283  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:43804->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:25.486311  920123 retry.go:31] will retry after 410.636872ms: ssh: handshake failed: read tcp 127.0.0.1:43804->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:25.897912  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:43806->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:25.897942  920123 retry.go:31] will retry after 577.386264ms: ssh: handshake failed: read tcp 127.0.0.1:43806->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:26.476119  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:43818->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:26.476155  920123 retry.go:31] will retry after 677.389203ms: ssh: handshake failed: read tcp 127.0.0.1:43818->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:27.154171  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:27.154251  920123 provision.go:86] duration metric: configureAuth took 2.583540552s
	W1109 00:29:27.154262  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:29:27.154272  920123 retry.go:31] will retry after 1.971734ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:29:27.156452  920123 provision.go:83] configureAuth start
	I1109 00:29:27.156541  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:29:27.179115  920123 provision.go:138] copyHostCerts
	I1109 00:29:27.179182  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:29:27.179195  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:29:27.179257  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:29:27.179358  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:29:27.179368  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:29:27.179390  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:29:27.179453  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:29:27.179462  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:29:27.179482  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:29:27.179528  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:29:27.721987  920123 provision.go:172] copyRemoteCerts
	I1109 00:29:27.722083  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:29:27.722143  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:29:27.741085  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:29:27.741970  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:27.741995  920123 retry.go:31] will retry after 358.96772ms: ssh: handshake failed: EOF
	W1109 00:29:28.101992  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:43858->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:28.102022  920123 retry.go:31] will retry after 195.094272ms: ssh: handshake failed: read tcp 127.0.0.1:43858->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:28.297816  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:43860->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:28.297849  920123 retry.go:31] will retry after 680.791082ms: ssh: handshake failed: read tcp 127.0.0.1:43860->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:28.980276  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:43872->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:28.980311  920123 retry.go:31] will retry after 754.006605ms: ssh: handshake failed: read tcp 127.0.0.1:43872->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:29.735674  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:43882->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:29.735752  920123 provision.go:86] duration metric: configureAuth took 2.579279472s
	W1109 00:29:29.735764  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:43882->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:29.735774  920123 retry.go:31] will retry after 3.387897ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:43882->127.0.0.1:33939: read: connection reset by peer
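
The copyHostCerts step repeated in every cycle is a plain remove-then-rewrite of each PEM file (exec_runner.go:144/203/151). A minimal sketch of that pattern (`copyCert` is a hypothetical helper; the paths come from the log):

	package main

	import (
		"fmt"
		"os"
	)

	// copyCert removes any stale copy of dst, then writes it fresh from src,
	// matching the "found ..., removing ..." / "cp: ..." lines above.
	func copyCert(src, dst string) error {
		if _, err := os.Stat(dst); err == nil {
			fmt.Printf("found %s, removing ...\n", dst)
			if err := os.Remove(dst); err != nil {
				return err
			}
		}
		data, err := os.ReadFile(src)
		if err != nil {
			return err
		}
		fmt.Printf("cp: %s --> %s (%d bytes)\n", src, dst, len(data))
		return os.WriteFile(dst, data, 0o600)
	}

	func main() {
		base := "/home/jenkins/minikube-integration/17586-749551/.minikube"
		if err := copyCert(base+"/certs/ca.pem", base+"/ca.pem"); err != nil {
			fmt.Println(err)
		}
	}
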
	I1109 00:29:29.739943  920123 provision.go:83] configureAuth start
	I1109 00:29:29.740042  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:29:29.760607  920123 provision.go:138] copyHostCerts
	I1109 00:29:29.760679  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:29:29.760693  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:29:29.760755  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:29:29.760848  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:29:29.760862  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:29:29.760883  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:29:29.760933  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:29:29.760943  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:29:29.760962  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:29:29.761008  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:29:30.306288  920123 provision.go:172] copyRemoteCerts
	I1109 00:29:30.306378  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:29:30.306439  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:29:30.330107  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:29:30.331019  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:30.331041  920123 retry.go:31] will retry after 144.466989ms: ssh: handshake failed: EOF
	W1109 00:29:30.476815  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:30.476844  920123 retry.go:31] will retry after 267.001916ms: ssh: handshake failed: EOF
	W1109 00:29:30.744824  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:43894->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:30.744853  920123 retry.go:31] will retry after 353.737716ms: ssh: handshake failed: read tcp 127.0.0.1:43894->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:31.099917  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:43896->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:31.099950  920123 retry.go:31] will retry after 645.993329ms: ssh: handshake failed: read tcp 127.0.0.1:43896->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:31.747437  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:43900->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:31.747527  920123 retry.go:31] will retry after 217.622261ms: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:43900->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:31.966022  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:29:31.984552  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:29:31.985416  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:31.985465  920123 retry.go:31] will retry after 288.09623ms: ssh: handshake failed: EOF
	W1109 00:29:32.274383  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:59280->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:32.274417  920123 retry.go:31] will retry after 232.775276ms: ssh: handshake failed: read tcp 127.0.0.1:59280->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:32.508171  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:59294->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:32.508202  920123 retry.go:31] will retry after 709.299417ms: ssh: handshake failed: read tcp 127.0.0.1:59294->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:33.218514  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:59304->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:33.218599  920123 provision.go:86] duration metric: configureAuth took 3.478633763s
	W1109 00:29:33.218606  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:59304->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:33.218615  920123 retry.go:31] will retry after 3.862966ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:59304->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:33.222765  920123 provision.go:83] configureAuth start
	I1109 00:29:33.222867  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:29:33.240887  920123 provision.go:138] copyHostCerts
	I1109 00:29:33.240958  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:29:33.240967  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:29:33.241026  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:29:33.241113  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:29:33.241118  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:29:33.241138  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:29:33.241188  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:29:33.241193  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:29:33.241212  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:29:33.241252  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:29:33.715032  920123 provision.go:172] copyRemoteCerts
	I1109 00:29:33.715133  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:29:33.715178  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:29:33.735635  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:29:33.736546  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:33.736574  920123 retry.go:31] will retry after 252.349468ms: ssh: handshake failed: EOF
	W1109 00:29:33.990455  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:33.990487  920123 retry.go:31] will retry after 491.802169ms: ssh: handshake failed: EOF
	W1109 00:29:34.483398  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:34.483433  920123 retry.go:31] will retry after 730.351679ms: ssh: handshake failed: EOF
	W1109 00:29:35.215204  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:59336->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:35.215276  920123 retry.go:31] will retry after 188.920548ms: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:59336->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:35.404636  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:29:35.423014  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:29:35.423853  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:35.423875  920123 retry.go:31] will retry after 233.934603ms: ssh: handshake failed: EOF
	W1109 00:29:35.658802  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:35.658832  920123 retry.go:31] will retry after 319.91946ms: ssh: handshake failed: EOF
	W1109 00:29:35.979859  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:59370->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:35.979886  920123 retry.go:31] will retry after 807.422227ms: ssh: handshake failed: read tcp 127.0.0.1:59370->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:36.788078  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:59378->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:36.788162  920123 provision.go:86] duration metric: configureAuth took 3.565371469s
	W1109 00:29:36.788175  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:59378->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:36.788187  920123 retry.go:31] will retry after 12.777116ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:59378->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:36.801328  920123 provision.go:83] configureAuth start
	I1109 00:29:36.801427  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:29:36.835148  920123 provision.go:138] copyHostCerts
	I1109 00:29:36.835264  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:29:36.835279  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:29:36.835372  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:29:36.835540  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:29:36.835555  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:29:36.835610  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:29:36.835710  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:29:36.835723  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:29:36.835772  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:29:36.835860  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:29:37.303526  920123 provision.go:172] copyRemoteCerts
	I1109 00:29:37.303687  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:29:37.303868  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:29:37.330847  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:29:37.331742  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:37.331768  920123 retry.go:31] will retry after 368.755961ms: ssh: handshake failed: EOF
	W1109 00:29:37.701822  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:59400->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:37.701853  920123 retry.go:31] will retry after 530.981671ms: ssh: handshake failed: read tcp 127.0.0.1:59400->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:38.234008  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:38.234039  920123 retry.go:31] will retry after 775.407623ms: ssh: handshake failed: EOF
	W1109 00:29:39.010306  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:59420->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:39.010372  920123 retry.go:31] will retry after 259.482277ms: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:59420->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:39.270866  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:29:39.289352  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:29:39.290180  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:59430->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:39.290203  920123 retry.go:31] will retry after 282.414152ms: ssh: handshake failed: read tcp 127.0.0.1:59430->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:39.574173  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:39.574205  920123 retry.go:31] will retry after 449.224308ms: ssh: handshake failed: EOF
	W1109 00:29:40.024684  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:59444->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:40.024718  920123 retry.go:31] will retry after 503.115299ms: ssh: handshake failed: read tcp 127.0.0.1:59444->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:40.528781  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:40.528868  920123 provision.go:86] duration metric: configureAuth took 3.727512457s
	W1109 00:29:40.528880  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:29:40.528891  920123 retry.go:31] will retry after 12.865912ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:29:40.542096  920123 provision.go:83] configureAuth start
	I1109 00:29:40.542197  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:29:40.559734  920123 provision.go:138] copyHostCerts
	I1109 00:29:40.559807  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:29:40.559820  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:29:40.559891  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:29:40.559987  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:29:40.559996  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:29:40.560018  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:29:40.560072  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:29:40.560081  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:29:40.560101  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:29:40.560147  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:29:40.965824  920123 provision.go:172] copyRemoteCerts
	I1109 00:29:40.965897  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:29:40.965943  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:29:40.984907  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:29:40.985858  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:40.985882  920123 retry.go:31] will retry after 321.757367ms: ssh: handshake failed: EOF
	W1109 00:29:41.308898  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:59468->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:41.308928  920123 retry.go:31] will retry after 518.386772ms: ssh: handshake failed: read tcp 127.0.0.1:59468->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:41.827994  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:59470->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:41.828022  920123 retry.go:31] will retry after 795.704556ms: ssh: handshake failed: read tcp 127.0.0.1:59470->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:42.624479  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:42.624554  920123 retry.go:31] will retry after 334.97686ms: new client: new client: ssh: handshake failed: EOF
	I1109 00:29:42.960137  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:29:42.979236  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:29:42.980205  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:42.980225  920123 retry.go:31] will retry after 154.591378ms: ssh: handshake failed: EOF
	W1109 00:29:43.136024  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:43.136051  920123 retry.go:31] will retry after 230.655391ms: ssh: handshake failed: EOF
	W1109 00:29:43.368074  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:53912->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:43.368106  920123 retry.go:31] will retry after 285.260219ms: ssh: handshake failed: read tcp 127.0.0.1:53912->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:43.654011  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:53918->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:43.654038  920123 retry.go:31] will retry after 906.48724ms: ssh: handshake failed: read tcp 127.0.0.1:53918->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:44.561552  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:53928->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:44.561631  920123 provision.go:86] duration metric: configureAuth took 4.019509164s
	W1109 00:29:44.561638  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:53928->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:44.561647  920123 retry.go:31] will retry after 12.621991ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:53928->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:44.574830  920123 provision.go:83] configureAuth start
	I1109 00:29:44.574928  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:29:44.594760  920123 provision.go:138] copyHostCerts
	I1109 00:29:44.594831  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:29:44.594846  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:29:44.594916  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:29:44.595010  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:29:44.595020  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:29:44.595043  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:29:44.595130  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:29:44.595141  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:29:44.595163  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:29:44.595210  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:29:45.818253  920123 provision.go:172] copyRemoteCerts
	I1109 00:29:45.818331  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:29:45.818372  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:29:45.837070  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:29:45.837925  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:53940->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:45.837949  920123 retry.go:31] will retry after 131.761942ms: ssh: handshake failed: read tcp 127.0.0.1:53940->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:45.970754  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:53954->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:45.970784  920123 retry.go:31] will retry after 318.328077ms: ssh: handshake failed: read tcp 127.0.0.1:53954->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:46.289753  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:53964->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:46.289781  920123 retry.go:31] will retry after 824.487312ms: ssh: handshake failed: read tcp 127.0.0.1:53964->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:47.115149  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:47.115223  920123 retry.go:31] will retry after 261.252205ms: new client: new client: ssh: handshake failed: EOF
	I1109 00:29:47.376664  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:29:47.395322  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:29:47.396233  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:47.396253  920123 retry.go:31] will retry after 270.614189ms: ssh: handshake failed: EOF
	W1109 00:29:47.668225  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:53992->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:47.668254  920123 retry.go:31] will retry after 414.92741ms: ssh: handshake failed: read tcp 127.0.0.1:53992->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:48.084484  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:53996->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:48.084518  920123 retry.go:31] will retry after 317.027274ms: ssh: handshake failed: read tcp 127.0.0.1:53996->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:48.402128  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:48.402151  920123 retry.go:31] will retry after 823.443274ms: ssh: handshake failed: EOF
	W1109 00:29:49.226204  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:54018->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:49.226289  920123 provision.go:86] duration metric: configureAuth took 4.651433529s
	W1109 00:29:49.226300  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:54018->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:49.226311  920123 retry.go:31] will retry after 35.686655ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:54018->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:49.262492  920123 provision.go:83] configureAuth start
	I1109 00:29:49.262593  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:29:49.289114  920123 provision.go:138] copyHostCerts
	I1109 00:29:49.289174  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:29:49.289182  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:29:49.289245  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:29:49.289338  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:29:49.289342  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:29:49.289363  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:29:49.289412  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:29:49.289416  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:29:49.289470  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:29:49.289521  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:29:49.510502  920123 provision.go:172] copyRemoteCerts
	I1109 00:29:49.510576  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:29:49.510624  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:29:49.530057  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:29:49.530955  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:49.530974  920123 retry.go:31] will retry after 167.45796ms: ssh: handshake failed: EOF
	W1109 00:29:49.699741  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:49.699771  920123 retry.go:31] will retry after 407.525455ms: ssh: handshake failed: EOF
	W1109 00:29:50.108028  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:50.108060  920123 retry.go:31] will retry after 413.776079ms: ssh: handshake failed: EOF
	W1109 00:29:50.522893  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:50.522920  920123 retry.go:31] will retry after 718.951706ms: ssh: handshake failed: EOF
	W1109 00:29:51.243027  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:51.243095  920123 retry.go:31] will retry after 148.768465ms: new client: new client: ssh: handshake failed: EOF
	I1109 00:29:51.392477  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:29:51.412615  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:29:51.413570  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:51.413590  920123 retry.go:31] will retry after 213.983267ms: ssh: handshake failed: EOF
	W1109 00:29:51.628407  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:51.628436  920123 retry.go:31] will retry after 285.053457ms: ssh: handshake failed: EOF
	W1109 00:29:51.914048  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:51.914077  920123 retry.go:31] will retry after 656.873115ms: ssh: handshake failed: EOF
	W1109 00:29:52.572390  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:52.572470  920123 provision.go:86] duration metric: configureAuth took 3.30995516s
	W1109 00:29:52.572480  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:29:52.572571  920123 retry.go:31] will retry after 42.458534ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:29:52.615770  920123 provision.go:83] configureAuth start
	I1109 00:29:52.615871  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:29:52.634713  920123 provision.go:138] copyHostCerts
	I1109 00:29:52.634786  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:29:52.634800  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:29:52.634862  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:29:52.634957  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:29:52.634966  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:29:52.634988  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:29:52.635045  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:29:52.635053  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:29:52.635074  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:29:52.635124  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:29:53.362221  920123 provision.go:172] copyRemoteCerts
	I1109 00:29:53.362295  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:29:53.362338  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:29:53.381171  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:29:53.382064  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:53.382085  920123 retry.go:31] will retry after 139.305729ms: ssh: handshake failed: EOF
	W1109 00:29:53.522849  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:53.522878  920123 retry.go:31] will retry after 462.684599ms: ssh: handshake failed: EOF
	W1109 00:29:53.987071  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:58226->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:53.987101  920123 retry.go:31] will retry after 325.827938ms: ssh: handshake failed: read tcp 127.0.0.1:58226->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:54.314040  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:54.314071  920123 retry.go:31] will retry after 812.80247ms: ssh: handshake failed: EOF
	W1109 00:29:55.127961  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:55.128040  920123 provision.go:86] duration metric: configureAuth took 2.512241728s
	W1109 00:29:55.128049  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:29:55.128063  920123 retry.go:31] will retry after 90.394201ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:29:55.219301  920123 provision.go:83] configureAuth start
	I1109 00:29:55.219434  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:29:55.237334  920123 provision.go:138] copyHostCerts
	I1109 00:29:55.237400  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:29:55.237419  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:29:55.237517  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:29:55.237612  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:29:55.237657  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:29:55.237684  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:29:55.237740  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:29:55.237751  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:29:55.237770  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:29:55.237831  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:29:55.801704  920123 provision.go:172] copyRemoteCerts
	I1109 00:29:55.801778  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:29:55.801819  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:29:55.819967  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:29:55.820866  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:58256->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:55.820890  920123 retry.go:31] will retry after 139.898921ms: ssh: handshake failed: read tcp 127.0.0.1:58256->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:55.961705  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:58260->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:55.961792  920123 retry.go:31] will retry after 208.33032ms: ssh: handshake failed: read tcp 127.0.0.1:58260->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:56.171632  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:56.171663  920123 retry.go:31] will retry after 680.927007ms: ssh: handshake failed: EOF
	W1109 00:29:56.853969  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:58274->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:56.854000  920123 retry.go:31] will retry after 551.06832ms: ssh: handshake failed: read tcp 127.0.0.1:58274->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:57.405911  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:58278->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:57.405984  920123 retry.go:31] will retry after 345.330197ms: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:58278->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:57.751541  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:29:57.770245  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:29:57.771173  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:57.771197  920123 retry.go:31] will retry after 317.562764ms: ssh: handshake failed: EOF
	W1109 00:29:58.090125  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:58302->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:58.090165  920123 retry.go:31] will retry after 216.071554ms: ssh: handshake failed: read tcp 127.0.0.1:58302->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:58.306860  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:58312->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:58.306890  920123 retry.go:31] will retry after 659.460172ms: ssh: handshake failed: read tcp 127.0.0.1:58312->127.0.0.1:33939: read: connection reset by peer
	W1109 00:29:58.967036  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:58318->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:58.967117  920123 provision.go:86] duration metric: configureAuth took 3.74778792s
	W1109 00:29:58.967127  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:58318->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:58.967139  920123 retry.go:31] will retry after 112.131564ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:58318->127.0.0.1:33939: read: connection reset by peer
	I1109 00:29:59.079329  920123 provision.go:83] configureAuth start
	I1109 00:29:59.079441  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:29:59.097314  920123 provision.go:138] copyHostCerts
	I1109 00:29:59.097390  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:29:59.097404  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:29:59.097507  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:29:59.097614  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:29:59.097624  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:29:59.097646  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:29:59.097702  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:29:59.097709  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:29:59.097728  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:29:59.097779  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:29:59.958664  920123 provision.go:172] copyRemoteCerts
	I1109 00:29:59.958737  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:29:59.958784  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:29:59.988847  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:29:59.989836  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:29:59.989855  920123 retry.go:31] will retry after 221.424464ms: ssh: handshake failed: EOF
	W1109 00:30:00.219966  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:00.219997  920123 retry.go:31] will retry after 308.835259ms: ssh: handshake failed: EOF
	W1109 00:30:00.535739  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:58334->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:00.535794  920123 retry.go:31] will retry after 525.547943ms: ssh: handshake failed: read tcp 127.0.0.1:58334->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:01.062193  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:58348->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:01.062282  920123 retry.go:31] will retry after 363.308327ms: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:58348->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:01.425776  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:30:01.446026  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:30:01.446965  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:01.446990  920123 retry.go:31] will retry after 312.292736ms: ssh: handshake failed: EOF
	W1109 00:30:01.759937  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:01.759966  920123 retry.go:31] will retry after 517.883799ms: ssh: handshake failed: EOF
	W1109 00:30:02.279286  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:02.279319  920123 retry.go:31] will retry after 682.522078ms: ssh: handshake failed: EOF
	W1109 00:30:02.962637  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:02.962735  920123 provision.go:86] duration metric: configureAuth took 3.883372291s
	W1109 00:30:02.962748  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:30:02.962757  920123 retry.go:31] will retry after 174.828425ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: EOF
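
[editor's note] Each "docker container inspect -f ..." run above recovers the host port (33939 in this log) that Docker published for the node's 22/tcp. The same lookup can be scripted; a minimal Go sketch, with the container name taken from this log and otherwise illustrative (the extra single quotes in the logged command are shell quoting, not part of the template):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort asks Docker which host port is mapped to the container's
// 22/tcp, mirroring the repeated "docker container inspect -f" runs above.
func hostSSHPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("no-preload-881977")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("ssh reachable at 127.0.0.1:" + port)
}
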
	I1109 00:30:03.138139  920123 provision.go:83] configureAuth start
	I1109 00:30:03.138247  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:30:03.157629  920123 provision.go:138] copyHostCerts
	I1109 00:30:03.157701  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:30:03.157716  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:30:03.157780  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:30:03.157878  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:30:03.157889  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:30:03.157917  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:30:03.158028  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:30:03.158040  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:30:03.158062  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:30:03.158113  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:30:03.890144  920123 provision.go:172] copyRemoteCerts
	I1109 00:30:03.890215  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:30:03.890257  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:30:03.912134  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:30:03.912996  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:03.913016  920123 retry.go:31] will retry after 364.236699ms: ssh: handshake failed: EOF
	W1109 00:30:04.277897  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:04.277928  920123 retry.go:31] will retry after 305.808604ms: ssh: handshake failed: EOF
	W1109 00:30:04.584750  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:39314->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:04.584780  920123 retry.go:31] will retry after 504.058401ms: ssh: handshake failed: read tcp 127.0.0.1:39314->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:05.090373  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:39320->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:05.090445  920123 retry.go:31] will retry after 207.362524ms: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:39320->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:05.298942  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:30:05.319357  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:30:05.320317  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:05.320339  920123 retry.go:31] will retry after 166.067608ms: ssh: handshake failed: EOF
	W1109 00:30:05.487105  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:05.487139  920123 retry.go:31] will retry after 504.162776ms: ssh: handshake failed: EOF
	W1109 00:30:05.991895  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:05.991924  920123 retry.go:31] will retry after 809.391574ms: ssh: handshake failed: EOF
	W1109 00:30:06.801999  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:39356->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:06.802098  920123 provision.go:86] duration metric: configureAuth took 3.663926213s
	W1109 00:30:06.802109  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:39356->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:06.802120  920123 retry.go:31] will retry after 307.455903ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:39356->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:07.110624  920123 provision.go:83] configureAuth start
	I1109 00:30:07.110738  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:30:07.129049  920123 provision.go:138] copyHostCerts
	I1109 00:30:07.129122  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:30:07.129135  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:30:07.129202  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:30:07.129307  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:30:07.129317  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:30:07.129340  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:30:07.129394  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:30:07.129403  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:30:07.129423  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:30:07.129508  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:30:08.192875  920123 provision.go:172] copyRemoteCerts
	I1109 00:30:08.192946  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:30:08.192995  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:30:08.216134  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:30:08.217005  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:08.217024  920123 retry.go:31] will retry after 322.021776ms: ssh: handshake failed: EOF
	W1109 00:30:08.539656  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:08.539682  920123 retry.go:31] will retry after 269.650874ms: ssh: handshake failed: EOF
	W1109 00:30:08.810350  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:08.810380  920123 retry.go:31] will retry after 478.589587ms: ssh: handshake failed: EOF
	W1109 00:30:09.290584  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:39402->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:09.290653  920123 retry.go:31] will retry after 236.901118ms: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:39402->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:09.528108  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:30:09.546797  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:30:09.547663  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:39406->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:09.547685  920123 retry.go:31] will retry after 333.986003ms: ssh: handshake failed: read tcp 127.0.0.1:39406->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:09.882675  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:09.882705  920123 retry.go:31] will retry after 481.545934ms: ssh: handshake failed: EOF
	W1109 00:30:10.364976  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:39436->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:10.365012  920123 retry.go:31] will retry after 744.701253ms: ssh: handshake failed: read tcp 127.0.0.1:39436->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:11.114052  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:11.114137  920123 provision.go:86] duration metric: configureAuth took 4.003486305s
	W1109 00:30:11.114145  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:30:11.114154  920123 retry.go:31] will retry after 491.802052ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:30:11.606594  920123 provision.go:83] configureAuth start
	I1109 00:30:11.606681  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:30:11.628424  920123 provision.go:138] copyHostCerts
	I1109 00:30:11.628491  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:30:11.628504  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:30:11.628567  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:30:11.628659  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:30:11.628667  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:30:11.628689  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:30:11.628741  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:30:11.628751  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:30:11.628770  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:30:11.628812  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:30:11.910326  920123 provision.go:172] copyRemoteCerts
	I1109 00:30:11.910402  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:30:11.910447  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:30:11.945609  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:30:11.946608  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:11.946626  920123 retry.go:31] will retry after 136.855713ms: ssh: handshake failed: EOF
	W1109 00:30:12.084423  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:12.084457  920123 retry.go:31] will retry after 553.215528ms: ssh: handshake failed: EOF
	W1109 00:30:12.638482  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:12.638508  920123 retry.go:31] will retry after 362.452178ms: ssh: handshake failed: EOF
	W1109 00:30:13.002825  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:37062->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:13.002859  920123 retry.go:31] will retry after 643.992587ms: ssh: handshake failed: read tcp 127.0.0.1:37062->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:13.647784  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:37066->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:13.647859  920123 retry.go:31] will retry after 156.218637ms: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:37066->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:13.805231  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:30:13.829262  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:30:13.830267  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:13.830294  920123 retry.go:31] will retry after 323.402084ms: ssh: handshake failed: EOF
	W1109 00:30:14.155128  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:37082->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:14.155154  920123 retry.go:31] will retry after 373.20988ms: ssh: handshake failed: read tcp 127.0.0.1:37082->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:14.529099  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:37096->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:14.529127  920123 retry.go:31] will retry after 614.721581ms: ssh: handshake failed: read tcp 127.0.0.1:37096->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:15.144546  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:37100->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:15.144631  920123 provision.go:86] duration metric: configureAuth took 3.538014742s
	W1109 00:30:15.144643  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:37100->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:15.144655  920123 retry.go:31] will retry after 267.106525ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:37100->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:15.411940  920123 provision.go:83] configureAuth start
	I1109 00:30:15.412047  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:30:15.455538  920123 provision.go:138] copyHostCerts
	I1109 00:30:15.455600  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:30:15.455609  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:30:15.455693  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:30:15.455789  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:30:15.455800  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:30:15.455829  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:30:15.455904  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:30:15.455912  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:30:15.455939  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:30:15.455999  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:30:16.002924  920123 provision.go:172] copyRemoteCerts
	I1109 00:30:16.003061  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:30:16.003141  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:30:16.024041  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:30:16.025078  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:37114->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:16.025149  920123 retry.go:31] will retry after 281.549661ms: ssh: handshake failed: read tcp 127.0.0.1:37114->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:16.307860  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:37126->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:16.307887  920123 retry.go:31] will retry after 394.207728ms: ssh: handshake failed: read tcp 127.0.0.1:37126->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:16.703474  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:37140->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:16.703508  920123 retry.go:31] will retry after 659.078686ms: ssh: handshake failed: read tcp 127.0.0.1:37140->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:17.363255  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:17.363333  920123 retry.go:31] will retry after 314.239096ms: new client: new client: ssh: handshake failed: EOF
	I1109 00:30:17.677780  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:30:17.702675  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:30:17.703661  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:37168->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:17.703682  920123 retry.go:31] will retry after 183.401918ms: ssh: handshake failed: read tcp 127.0.0.1:37168->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:17.888465  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:37172->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:17.888489  920123 retry.go:31] will retry after 540.313307ms: ssh: handshake failed: read tcp 127.0.0.1:37172->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:18.429691  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:37182->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:18.429730  920123 retry.go:31] will retry after 485.800089ms: ssh: handshake failed: read tcp 127.0.0.1:37182->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:18.916845  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:37192->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:18.916872  920123 retry.go:31] will retry after 669.754551ms: ssh: handshake failed: read tcp 127.0.0.1:37192->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:19.588198  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:19.588275  920123 provision.go:86] duration metric: configureAuth took 4.17630801s
	W1109 00:30:19.588282  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:30:19.588291  920123 retry.go:31] will retry after 545.837115ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: EOF
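
[editor's note] The "new ssh client: &{IP:127.0.0.1 Port:33939 ...}" lines correspond to dialing the published port with the machine's id_rsa key; while the guest sshd is unhealthy, the TCP connect succeeds but the session is reset, surfacing as the "ssh: handshake failed" errors being retried throughout. A minimal sketch of such a dial using golang.org/x/crypto/ssh (an assumption about the client library; address, user, and key location copied from this log purely for illustration):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// dialNode opens an SSH connection the way the sshutil lines above do:
// public-key auth with the machine's id_rsa against 127.0.0.1:<port>.
func dialNode(addr, user, keyPath string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only; real code should verify host keys
	}
	// A "connection reset by peer" or EOF here is exactly the
	// "ssh: handshake failed" failure this log keeps retrying.
	return ssh.Dial("tcp", addr, cfg)
}

func main() {
	c, err := dialNode("127.0.0.1:33939", "docker", "/path/to/machines/no-preload-881977/id_rsa")
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer c.Close()
	fmt.Println("connected")
}
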
	I1109 00:30:20.135124  920123 provision.go:83] configureAuth start
	I1109 00:30:20.135224  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:30:20.162198  920123 provision.go:138] copyHostCerts
	I1109 00:30:20.162273  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:30:20.162288  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:30:20.162367  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:30:20.162515  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:30:20.162528  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:30:20.162561  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:30:20.162622  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:30:20.162635  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:30:20.162663  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:30:20.162714  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:30:21.617160  920123 provision.go:172] copyRemoteCerts
	I1109 00:30:21.617237  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:30:21.617287  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:30:21.635497  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:30:21.636313  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:37206->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:21.636333  920123 retry.go:31] will retry after 140.273056ms: ssh: handshake failed: read tcp 127.0.0.1:37206->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:21.778089  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:21.778119  920123 retry.go:31] will retry after 353.736488ms: ssh: handshake failed: EOF
	W1109 00:30:22.133266  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50754->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:22.133298  920123 retry.go:31] will retry after 399.208152ms: ssh: handshake failed: read tcp 127.0.0.1:50754->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:22.533852  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50766->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:22.533883  920123 retry.go:31] will retry after 922.359387ms: ssh: handshake failed: read tcp 127.0.0.1:50766->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:23.456935  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50776->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:23.457021  920123 provision.go:86] duration metric: configureAuth took 3.321873957s
	W1109 00:30:23.457029  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:50776->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:23.457039  920123 retry.go:31] will retry after 1.108275386s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:50776->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:24.565549  920123 provision.go:83] configureAuth start
	I1109 00:30:24.565653  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:30:24.588132  920123 provision.go:138] copyHostCerts
	I1109 00:30:24.588223  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:30:24.588254  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:30:24.588368  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:30:24.588505  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:30:24.588519  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:30:24.588554  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:30:24.588640  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:30:24.588668  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:30:24.588707  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:30:24.588783  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:30:24.786515  920123 provision.go:172] copyRemoteCerts
	I1109 00:30:24.786587  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:30:24.786634  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:30:24.813393  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:30:24.814237  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50790->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:24.814258  920123 retry.go:31] will retry after 331.27389ms: ssh: handshake failed: read tcp 127.0.0.1:50790->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:25.146767  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:25.146799  920123 retry.go:31] will retry after 431.039768ms: ssh: handshake failed: EOF
	W1109 00:30:25.578507  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:25.578531  920123 retry.go:31] will retry after 660.576228ms: ssh: handshake failed: EOF
	W1109 00:30:26.239644  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:26.239712  920123 retry.go:31] will retry after 268.804072ms: new client: new client: ssh: handshake failed: EOF
	I1109 00:30:26.509201  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:30:26.531054  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:30:26.531989  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50822->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:26.532018  920123 retry.go:31] will retry after 175.315902ms: ssh: handshake failed: read tcp 127.0.0.1:50822->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:26.708820  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50834->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:26.708853  920123 retry.go:31] will retry after 530.667093ms: ssh: handshake failed: read tcp 127.0.0.1:50834->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:27.241020  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50836->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:27.241060  920123 retry.go:31] will retry after 775.628166ms: ssh: handshake failed: read tcp 127.0.0.1:50836->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:28.017282  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:28.017379  920123 provision.go:86] duration metric: configureAuth took 3.451803964s
	W1109 00:30:28.017388  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:30:28.017402  920123 retry.go:31] will retry after 1.528652956s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:30:29.546170  920123 provision.go:83] configureAuth start
	I1109 00:30:29.546255  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:30:29.566574  920123 provision.go:138] copyHostCerts
	I1109 00:30:29.566631  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:30:29.566641  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:30:29.566714  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:30:29.566802  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:30:29.566807  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:30:29.566831  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:30:29.566885  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:30:29.566889  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:30:29.566911  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:30:29.566952  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:30:29.995365  920123 provision.go:172] copyRemoteCerts
	I1109 00:30:29.995490  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:30:29.995564  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:30:30.026391  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:30:30.027437  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50842->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:30.027463  920123 retry.go:31] will retry after 269.301364ms: ssh: handshake failed: read tcp 127.0.0.1:50842->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:30.298152  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50844->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:30.298177  920123 retry.go:31] will retry after 188.683791ms: ssh: handshake failed: read tcp 127.0.0.1:50844->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:30.488011  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50852->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:30.488035  920123 retry.go:31] will retry after 723.899519ms: ssh: handshake failed: read tcp 127.0.0.1:50852->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:31.213418  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50856->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:31.213472  920123 retry.go:31] will retry after 639.335876ms: ssh: handshake failed: read tcp 127.0.0.1:50856->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:31.853504  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50860->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:31.853585  920123 provision.go:86] duration metric: configureAuth took 2.307393594s
	W1109 00:30:31.853599  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:50860->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:31.853611  920123 retry.go:31] will retry after 2.917444052s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:50860->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:34.773518  920123 provision.go:83] configureAuth start
	I1109 00:30:34.773646  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:30:34.797560  920123 provision.go:138] copyHostCerts
	I1109 00:30:34.797626  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:30:34.797635  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:30:34.797698  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:30:34.797789  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:30:34.797794  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:30:34.797817  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:30:34.797865  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:30:34.797869  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:30:34.797887  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:30:34.797929  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:30:35.853661  920123 provision.go:172] copyRemoteCerts
	I1109 00:30:35.853780  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:30:35.853855  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:30:35.874174  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:30:35.875081  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:35.875107  920123 retry.go:31] will retry after 220.435961ms: ssh: handshake failed: EOF
	W1109 00:30:36.096964  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:36.096994  920123 retry.go:31] will retry after 284.489921ms: ssh: handshake failed: EOF
	W1109 00:30:36.382953  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:52552->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:36.382987  920123 retry.go:31] will retry after 757.504928ms: ssh: handshake failed: read tcp 127.0.0.1:52552->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:37.141110  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:37.141139  920123 retry.go:31] will retry after 677.317811ms: ssh: handshake failed: EOF
	W1109 00:30:37.819089  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:37.819165  920123 provision.go:86] duration metric: configureAuth took 3.045611092s
	W1109 00:30:37.819173  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:30:37.819182  920123 retry.go:31] will retry after 3.84030342s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:30:41.660355  920123 provision.go:83] configureAuth start
	I1109 00:30:41.660455  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:30:41.684559  920123 provision.go:138] copyHostCerts
	I1109 00:30:41.684631  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:30:41.684640  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:30:41.684704  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:30:41.684796  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:30:41.684801  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:30:41.684822  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:30:41.684873  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:30:41.684878  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:30:41.684897  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:30:41.684937  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:30:42.430277  920123 provision.go:172] copyRemoteCerts
	I1109 00:30:42.430387  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:30:42.430464  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:30:42.456248  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:30:42.457168  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:42.457224  920123 retry.go:31] will retry after 320.815217ms: ssh: handshake failed: EOF
	W1109 00:30:42.779024  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:40482->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:42.779059  920123 retry.go:31] will retry after 372.154079ms: ssh: handshake failed: read tcp 127.0.0.1:40482->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:43.151899  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:40492->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:43.151934  920123 retry.go:31] will retry after 532.078043ms: ssh: handshake failed: read tcp 127.0.0.1:40492->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:43.684599  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:43.684631  920123 retry.go:31] will retry after 538.78746ms: ssh: handshake failed: EOF
	W1109 00:30:44.224831  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:44.224900  920123 retry.go:31] will retry after 153.829408ms: new client: new client: ssh: handshake failed: EOF
	I1109 00:30:44.379292  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:30:44.411663  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:30:44.412579  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:44.412604  920123 retry.go:31] will retry after 140.702226ms: ssh: handshake failed: EOF
	W1109 00:30:44.554388  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:40534->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:44.554418  920123 retry.go:31] will retry after 363.447327ms: ssh: handshake failed: read tcp 127.0.0.1:40534->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:44.918656  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:40546->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:44.918682  920123 retry.go:31] will retry after 351.917985ms: ssh: handshake failed: read tcp 127.0.0.1:40546->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:45.274402  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:40556->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:45.274497  920123 retry.go:31] will retry after 699.105838ms: ssh: handshake failed: read tcp 127.0.0.1:40556->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:45.974711  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:40572->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:45.974791  920123 provision.go:86] duration metric: configureAuth took 4.314410603s
	W1109 00:30:45.974802  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:40572->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:45.974811  920123 retry.go:31] will retry after 4.991117231s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:40572->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:50.966642  920123 provision.go:83] configureAuth start
	I1109 00:30:50.966758  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:30:50.985528  920123 provision.go:138] copyHostCerts
	I1109 00:30:50.985594  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:30:50.985610  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:30:50.985674  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:30:50.985781  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:30:50.985792  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:30:50.985817  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:30:50.985892  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:30:50.985902  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:30:50.985923  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:30:50.985979  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:30:51.174055  920123 provision.go:172] copyRemoteCerts
	I1109 00:30:51.174131  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:30:51.174176  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:30:51.193404  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:30:51.194341  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:51.194362  920123 retry.go:31] will retry after 178.682005ms: ssh: handshake failed: EOF
	W1109 00:30:51.374654  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:51.374683  920123 retry.go:31] will retry after 410.947728ms: ssh: handshake failed: EOF
	W1109 00:30:51.786733  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:40604->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:51.786762  920123 retry.go:31] will retry after 523.915791ms: ssh: handshake failed: read tcp 127.0.0.1:40604->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:52.312005  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60596->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:52.312038  920123 retry.go:31] will retry after 665.839722ms: ssh: handshake failed: read tcp 127.0.0.1:60596->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:52.979117  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60612->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:52.979196  920123 provision.go:86] duration metric: configureAuth took 2.012527932s
	W1109 00:30:52.979208  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:60612->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:52.979220  920123 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:60612->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:52.979228  920123 machine.go:91] provisioned docker machine in 7m57.851780589s
	I1109 00:30:52.979311  920123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 00:30:52.979361  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:30:52.997355  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:30:52.998616  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60620->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:52.998647  920123 retry.go:31] will retry after 353.277203ms: ssh: handshake failed: read tcp 127.0.0.1:60620->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:53.352469  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60634->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:53.352496  920123 retry.go:31] will retry after 207.138709ms: ssh: handshake failed: read tcp 127.0.0.1:60634->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:53.560223  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:53.560252  920123 retry.go:31] will retry after 598.35039ms: ssh: handshake failed: EOF
	W1109 00:30:54.159217  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60660->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:54.159254  920123 retry.go:31] will retry after 632.67631ms: ssh: handshake failed: read tcp 127.0.0.1:60660->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:54.792681  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60666->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:54.792756  920123 start.go:275] error running df -h /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:60666->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:54.792766  920123 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:60666->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:54.792821  920123 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 00:30:54.792872  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:30:54.814479  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:30:54.815355  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:30:54.815378  920123 retry.go:31] will retry after 168.116478ms: ssh: handshake failed: EOF
	W1109 00:30:54.984191  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60676->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:54.984232  920123 retry.go:31] will retry after 372.897171ms: ssh: handshake failed: read tcp 127.0.0.1:60676->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:55.358209  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60692->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:55.358237  920123 retry.go:31] will retry after 475.992129ms: ssh: handshake failed: read tcp 127.0.0.1:60692->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:55.835193  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60704->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:55.835215  920123 retry.go:31] will retry after 824.08858ms: ssh: handshake failed: read tcp 127.0.0.1:60704->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:56.660530  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60710->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:56.660623  920123 start.go:290] error running df -BG /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:60710->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:56.660633  920123 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:60710->127.0.0.1:33939: read: connection reset by peer
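
Before giving up on the host, the provisioner probes /var twice over SSH: df -h piped through awk for the percentage used, and df -BG for the gibibytes still available (both fail here because no SSH session can be established). Run locally, the same probes look like this sketch:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func probe(cmd string) string {
		out, err := exec.Command("sh", "-c", cmd).Output()
		if err != nil {
			return "unknown (" + err.Error() + ")"
		}
		return strings.TrimSpace(string(out))
	}

	func main() {
		// Same shell pipelines as the ssh_runner lines above; NR==2 picks the
		// data row under df's header, $5 is Use% and $4 is Avail.
		fmt.Println("/var used:", probe(`df -h /var | awk 'NR==2{print $5}'`))
		fmt.Println("/var free:", probe(`df -BG /var | awk 'NR==2{print $4}'`))
	}
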
	I1109 00:30:56.660639  920123 fix.go:56] fixHost completed within 8m1.561716044s
	I1109 00:30:56.660645  920123 start.go:83] releasing machines lock for "no-preload-881977", held for 8m1.561762198s
	W1109 00:30:56.660658  920123 start.go:691] error starting host: provision: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:60612->127.0.0.1:33939: read: connection reset by peer
	W1109 00:30:56.660724  920123 out.go:239] ! StartHost failed, but will try again: provision: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:60612->127.0.0.1:33939: read: connection reset by peer
	! StartHost failed, but will try again: provision: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:60612->127.0.0.1:33939: read: connection reset by peer
	I1109 00:30:56.660732  920123 start.go:706] Will try again in 5 seconds ...
	I1109 00:31:01.660882  920123 start.go:365] acquiring machines lock for no-preload-881977: {Name:mk3b964979021e50618b8ac49e6dc994101d0e99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 00:31:01.661003  920123 start.go:369] acquired machines lock for "no-preload-881977" in 79.549µs
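
The machines lock serializes host operations per profile, and the log records both the wait ("acquiring") and how long the lock was then held. The real lock is cross-process and file-backed (note the Name:mk3b... path and the Delay/Timeout knobs above); an in-process sketch shows just the bookkeeping:

	package main

	import (
		"fmt"
		"sync"
		"time"
	)

	var (
		mu    sync.Mutex
		locks = map[string]*sync.Mutex{}
	)

	// acquire blocks until the named lock is free, reports how long the
	// wait took, and returns the matching release function.
	func acquire(name string) func() {
		mu.Lock()
		l, ok := locks[name]
		if !ok {
			l = &sync.Mutex{}
			locks[name] = l
		}
		mu.Unlock()
		start := time.Now()
		l.Lock()
		fmt.Printf("acquired lock %q in %s\n", name, time.Since(start))
		return l.Unlock
	}

	func main() {
		release := acquire("no-preload-881977")
		defer release()
	}
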
	I1109 00:31:01.661039  920123 start.go:96] Skipping create...Using existing machine configuration
	I1109 00:31:01.661056  920123 fix.go:54] fixHost starting: 
	I1109 00:31:01.661336  920123 cli_runner.go:164] Run: docker container inspect no-preload-881977 --format={{.State.Status}}
	I1109 00:31:01.682940  920123 fix.go:102] recreateIfNeeded on no-preload-881977: state=Running err=<nil>
	W1109 00:31:01.682970  920123 fix.go:128] unexpected machine state, will restart: <nil>
	I1109 00:31:01.685521  920123 out.go:177] * Updating the running docker "no-preload-881977" container ...
	I1109 00:31:01.687226  920123 machine.go:88] provisioning docker machine ...
	I1109 00:31:01.687257  920123 ubuntu.go:169] provisioning hostname "no-preload-881977"
	I1109 00:31:01.687356  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:31:01.706138  920123 main.go:141] libmachine: Using SSH client type: native
	I1109 00:31:01.706547  920123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 33939 <nil> <nil>}
	I1109 00:31:01.706569  920123 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-881977 && echo "no-preload-881977" | sudo tee /etc/hostname
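
libmachine is now attempting the same transport directly: dial the forwarded port and run the hostname command in one SSH session. A hedged sketch with golang.org/x/crypto/ssh (placeholder password auth stands in for the id_rsa key at the SSHKeyPath logged earlier, and host-key checking is disabled only for brevity):

	package main

	import (
		"fmt"
		"log"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.Password("placeholder")}, // real flow: ssh.PublicKeys(id_rsa)
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33939", cfg)
		if err != nil {
			log.Fatalf("Error dialing TCP: %v", err) // the failure mode repeated below
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(
			`sudo hostname no-preload-881977 && echo "no-preload-881977" | sudo tee /etc/hostname`)
		fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
	}
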
	I1109 00:31:01.707009  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 00:31:04.708123  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 00:31:07.710604  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 00:31:10.711259  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34480->127.0.0.1:33939: read: connection reset by peer
	I1109 00:31:13.711869  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 00:31:16.713084  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50236->127.0.0.1:33939: read: connection reset by peer
	I1109 00:31:19.715540  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 00:31:22.718037  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48340->127.0.0.1:33939: read: connection reset by peer
	I1109 00:31:25.719024  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48356->127.0.0.1:33939: read: connection reset by peer
	I1109 00:31:28.719823  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48358->127.0.0.1:33939: read: connection reset by peer
	I1109 00:31:31.720510  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48364->127.0.0.1:33939: read: connection reset by peer
	I1109 00:31:34.722938  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40702->127.0.0.1:33939: read: connection reset by peer
	I1109 00:31:37.723694  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 00:31:40.724973  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40710->127.0.0.1:33939: read: connection reset by peer
	I1109 00:31:43.725756  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58578->127.0.0.1:33939: read: connection reset by peer
	I1109 00:31:46.726509  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58594->127.0.0.1:33939: read: connection reset by peer
	I1109 00:31:49.729138  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58608->127.0.0.1:33939: read: connection reset by peer
	I1109 00:31:52.730053  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48116->127.0.0.1:33939: read: connection reset by peer
	I1109 00:31:55.730776  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48118->127.0.0.1:33939: read: connection reset by peer
	I1109 00:31:58.733320  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48134->127.0.0.1:33939: read: connection reset by peer
	I1109 00:32:01.734675  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48136->127.0.0.1:33939: read: connection reset by peer
	I1109 00:32:04.737246  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 00:32:07.738112  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34098->127.0.0.1:33939: read: connection reset by peer
	I1109 00:32:10.738700  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 00:32:13.739276  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 00:32:16.740251  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40002->127.0.0.1:33939: read: connection reset by peer
	I1109 00:32:19.740971  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40004->127.0.0.1:33939: read: connection reset by peer
	I1109 00:32:22.742024  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40116->127.0.0.1:33939: read: connection reset by peer
	I1109 00:32:25.743266  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40128->127.0.0.1:33939: read: connection reset by peer
	I1109 00:32:28.743938  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40138->127.0.0.1:33939: read: connection reset by peer
	I1109 00:32:31.746100  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 00:32:34.748415  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34458->127.0.0.1:33939: read: connection reset by peer
	I1109 00:32:37.750320  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 00:32:40.752010  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34470->127.0.0.1:33939: read: connection reset by peer
	I1109 00:32:43.754092  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 00:32:46.755313  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38398->127.0.0.1:33939: read: connection reset by peer
	I1109 00:32:49.757193  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38408->127.0.0.1:33939: read: connection reset by peer
	I1109 00:32:52.759176  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52788->127.0.0.1:33939: read: connection reset by peer
	I1109 00:32:55.760436  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52804->127.0.0.1:33939: read: connection reset by peer
	I1109 00:32:58.761079  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52810->127.0.0.1:33939: read: connection reset by peer
	I1109 00:33:01.762066  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52814->127.0.0.1:33939: read: connection reset by peer
	I1109 00:33:04.763953  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46104->127.0.0.1:33939: read: connection reset by peer
	I1109 00:33:07.766268  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46118->127.0.0.1:33939: read: connection reset by peer
	I1109 00:33:10.767918  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46120->127.0.0.1:33939: read: connection reset by peer
	I1109 00:33:13.770373  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43144->127.0.0.1:33939: read: connection reset by peer
	I1109 00:33:16.771228  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43148->127.0.0.1:33939: read: connection reset by peer
	I1109 00:33:19.773734  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43158->127.0.0.1:33939: read: connection reset by peer
	I1109 00:33:22.775293  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53422->127.0.0.1:33939: read: connection reset by peer
	I1109 00:33:25.776917  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53430->127.0.0.1:33939: read: connection reset by peer
	I1109 00:33:28.778281  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53444->127.0.0.1:33939: read: connection reset by peer
	I1109 00:33:31.779160  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53450->127.0.0.1:33939: read: connection reset by peer
	I1109 00:33:34.781795  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34968->127.0.0.1:33939: read: connection reset by peer
	I1109 00:33:37.783773  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34984->127.0.0.1:33939: read: connection reset by peer
	I1109 00:33:40.784398  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34996->127.0.0.1:33939: read: connection reset by peer
	I1109 00:33:43.786045  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48220->127.0.0.1:33939: read: connection reset by peer
	I1109 00:33:46.787624  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48222->127.0.0.1:33939: read: connection reset by peer
	I1109 00:33:49.790122  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48226->127.0.0.1:33939: read: connection reset by peer
	I1109 00:33:52.793009  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 00:33:55.793849  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59730->127.0.0.1:33939: read: connection reset by peer
	I1109 00:33:58.795459  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59744->127.0.0.1:33939: read: connection reset by peer
	I1109 00:34:01.797006  920123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1109 00:34:01.797119  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:34:01.816681  920123 main.go:141] libmachine: Using SSH client type: native
	I1109 00:34:01.817091  920123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 33939 <nil> <nil>}
	I1109 00:34:01.817115  920123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-881977' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-881977/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-881977' | sudo tee -a /etc/hosts; 
				fi
			fi
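
This second SSH command keeps name resolution consistent with the new hostname: if no /etc/hosts line already ends in no-preload-881977, it rewrites an existing 127.0.1.1 entry in place, or appends one if none exists. A sketch of how a provisioner might template that script in Go (fmt.Sprintf here is illustrative, not necessarily what minikube uses):

	package main

	import "fmt"

	// hostsFixup renders the /etc/hosts maintenance script shown in the log
	// for an arbitrary machine name.
	func hostsFixup(name string) string {
		return fmt.Sprintf(`
	if ! grep -xq '.*\s%[1]s' /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
		else
			echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
		fi
	fi`, name)
	}

	func main() {
		fmt.Println(hostsFixup("no-preload-881977"))
	}
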
	I1109 00:34:01.817693  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59752->127.0.0.1:33939: read: connection reset by peer
	I1109 00:34:04.818441  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33818->127.0.0.1:33939: read: connection reset by peer
	I1109 00:34:07.821259  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33820->127.0.0.1:33939: read: connection reset by peer
	I1109 00:34:10.822156  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33834->127.0.0.1:33939: read: connection reset by peer
	I1109 00:34:13.824695  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37254->127.0.0.1:33939: read: connection reset by peer
	I1109 00:34:16.829675  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 00:34:19.830580  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 00:34:22.831855  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 00:34:25.832533  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45202->127.0.0.1:33939: read: connection reset by peer
	I1109 00:34:28.833774  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45208->127.0.0.1:33939: read: connection reset by peer
	I1109 00:34:31.835812  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45222->127.0.0.1:33939: read: connection reset by peer
	I1109 00:34:34.837202  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40080->127.0.0.1:33939: read: connection reset by peer
	I1109 00:34:37.839315  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40088->127.0.0.1:33939: read: connection reset by peer
	I1109 00:34:40.840087  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40092->127.0.0.1:33939: read: connection reset by peer
	I1109 00:34:43.841595  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50086->127.0.0.1:33939: read: connection reset by peer
	I1109 00:34:46.842205  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 00:34:49.843054  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50102->127.0.0.1:33939: read: connection reset by peer
	I1109 00:34:52.844837  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47186->127.0.0.1:33939: read: connection reset by peer
	I1109 00:34:55.845503  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47192->127.0.0.1:33939: read: connection reset by peer
	I1109 00:34:58.848329  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47198->127.0.0.1:33939: read: connection reset by peer
	I1109 00:35:01.849098  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 00:35:04.850448  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 00:35:07.852606  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46070->127.0.0.1:33939: read: connection reset by peer
	I1109 00:35:10.853291  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46086->127.0.0.1:33939: read: connection reset by peer
	I1109 00:35:13.854066  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55892->127.0.0.1:33939: read: connection reset by peer
	I1109 00:35:16.855526  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55900->127.0.0.1:33939: read: connection reset by peer
	I1109 00:35:19.857090  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55908->127.0.0.1:33939: read: connection reset by peer
	I1109 00:35:22.858294  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36270->127.0.0.1:33939: read: connection reset by peer
	I1109 00:35:25.858924  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36286->127.0.0.1:33939: read: connection reset by peer
	I1109 00:35:28.860625  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36298->127.0.0.1:33939: read: connection reset by peer
	I1109 00:35:31.861956  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 00:35:34.862639  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55826->127.0.0.1:33939: read: connection reset by peer
	I1109 00:35:37.863768  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55834->127.0.0.1:33939: read: connection reset by peer
	I1109 00:35:40.864512  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55844->127.0.0.1:33939: read: connection reset by peer
	I1109 00:35:43.866308  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55922->127.0.0.1:33939: read: connection reset by peer
	I1109 00:35:46.867017  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 00:35:49.868511  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55940->127.0.0.1:33939: read: connection reset by peer
	I1109 00:35:52.869968  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49098->127.0.0.1:33939: read: connection reset by peer
	I1109 00:35:55.870587  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49114->127.0.0.1:33939: read: connection reset by peer
	I1109 00:35:58.871237  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49116->127.0.0.1:33939: read: connection reset by peer
	I1109 00:36:01.871859  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49118->127.0.0.1:33939: read: connection reset by peer
	I1109 00:36:04.873303  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42272->127.0.0.1:33939: read: connection reset by peer
	I1109 00:36:07.873981  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42286->127.0.0.1:33939: read: connection reset by peer
	I1109 00:36:10.875071  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42290->127.0.0.1:33939: read: connection reset by peer
	I1109 00:36:13.876196  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49052->127.0.0.1:33939: read: connection reset by peer
	I1109 00:36:16.877295  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 00:36:19.878611  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 00:36:22.880768  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 00:36:25.881410  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 00:36:28.883537  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 00:36:31.884151  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 00:36:34.885692  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 00:36:37.888337  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43140->127.0.0.1:33939: read: connection reset by peer
	I1109 00:36:40.889104  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 00:36:43.891774  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55406->127.0.0.1:33939: read: connection reset by peer
	I1109 00:36:46.893121  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55412->127.0.0.1:33939: read: connection reset by peer
	I1109 00:36:49.894606  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55426->127.0.0.1:33939: read: connection reset by peer
	I1109 00:36:52.895754  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37738->127.0.0.1:33939: read: connection reset by peer
	I1109 00:36:55.896385  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37740->127.0.0.1:33939: read: connection reset by peer
	I1109 00:36:58.898888  920123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37752->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:01.900028  920123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1109 00:37:01.900055  920123 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17586-749551/.minikube CaCertPath:/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17586-749551/.minikube}
	I1109 00:37:01.900075  920123 ubuntu.go:177] setting up certificates
	I1109 00:37:01.900109  920123 provision.go:83] configureAuth start
	I1109 00:37:01.900182  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:37:01.919651  920123 provision.go:138] copyHostCerts
	I1109 00:37:01.919717  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:37:01.919727  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:37:01.919808  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:37:01.919991  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:37:01.920002  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:37:01.920035  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:37:01.920091  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:37:01.920097  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:37:01.920122  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:37:01.920168  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:37:02.068244  920123 provision.go:172] copyRemoteCerts
	I1109 00:37:02.068320  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:37:02.068373  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:37:02.088088  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:37:02.088988  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:02.089008  920123 retry.go:31] will retry after 310.814151ms: ssh: handshake failed: EOF
	W1109 00:37:02.400821  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60700->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:02.400847  920123 retry.go:31] will retry after 340.196723ms: ssh: handshake failed: read tcp 127.0.0.1:60700->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:02.741692  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60716->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:02.741727  920123 retry.go:31] will retry after 329.840007ms: ssh: handshake failed: read tcp 127.0.0.1:60716->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:03.072718  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:03.072795  920123 retry.go:31] will retry after 260.136606ms: new client: new client: ssh: handshake failed: EOF
	I1109 00:37:03.333996  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:37:03.358840  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:37:03.359846  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60726->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:03.359878  920123 retry.go:31] will retry after 222.402228ms: ssh: handshake failed: read tcp 127.0.0.1:60726->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:03.583635  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60732->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:03.583666  920123 retry.go:31] will retry after 434.240223ms: ssh: handshake failed: read tcp 127.0.0.1:60732->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:04.019436  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60748->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:04.019473  920123 retry.go:31] will retry after 613.176793ms: ssh: handshake failed: read tcp 127.0.0.1:60748->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:04.633365  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60752->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:04.633465  920123 provision.go:86] duration metric: configureAuth took 2.733349476s
	W1109 00:37:04.633485  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:60752->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:04.633494  920123 retry.go:31] will retry after 134.817µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:60752->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:04.634592  920123 provision.go:83] configureAuth start
	I1109 00:37:04.634690  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:37:04.661363  920123 provision.go:138] copyHostCerts
	I1109 00:37:04.661461  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:37:04.661475  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:37:04.661541  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:37:04.661642  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:37:04.661648  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:37:04.661669  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:37:04.661769  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:37:04.661779  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:37:04.661799  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:37:04.661844  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:37:05.016284  920123 provision.go:172] copyRemoteCerts
	I1109 00:37:05.016400  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:37:05.016481  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:37:05.045879  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:37:05.046797  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60758->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:05.046824  920123 retry.go:31] will retry after 304.421971ms: ssh: handshake failed: read tcp 127.0.0.1:60758->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:05.352683  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60760->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:05.352708  920123 retry.go:31] will retry after 444.397302ms: ssh: handshake failed: read tcp 127.0.0.1:60760->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:05.797740  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:05.797764  920123 retry.go:31] will retry after 340.347764ms: ssh: handshake failed: EOF
	W1109 00:37:06.139370  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60786->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:06.139413  920123 retry.go:31] will retry after 564.115424ms: ssh: handshake failed: read tcp 127.0.0.1:60786->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:06.704172  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60790->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:06.704247  920123 retry.go:31] will retry after 266.320094ms: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:60790->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:06.971549  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:37:07.018903  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:37:07.019873  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60796->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:07.019896  920123 retry.go:31] will retry after 136.412445ms: ssh: handshake failed: read tcp 127.0.0.1:60796->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:07.157667  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60812->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:07.157696  920123 retry.go:31] will retry after 219.888783ms: ssh: handshake failed: read tcp 127.0.0.1:60812->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:07.378402  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60814->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:07.378427  920123 retry.go:31] will retry after 775.007468ms: ssh: handshake failed: read tcp 127.0.0.1:60814->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:08.154104  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60820->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:08.154128  920123 retry.go:31] will retry after 502.208989ms: ssh: handshake failed: read tcp 127.0.0.1:60820->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:08.656937  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:08.657025  920123 provision.go:86] duration metric: configureAuth took 4.02241853s
	W1109 00:37:08.657033  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:37:08.657042  920123 retry.go:31] will retry after 118.8µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: EOF
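Each configureAuth pass walks the same steps before it dies on the SSH dial: copyHostCerts refreshes the local PEMs, provision.go:112 mints a server certificate, and copyRemoteCerts tries to push it into the machine over SSH (the triple /etc/docker in the mkdir -p is presumably the per-certificate destination directories joined without dedup, harmless under -p). An illustrative outline of that sequence, with stub bodies standing in for the real steps:

package main

import (
	"errors"
	"fmt"
)

func copyHostCerts() { fmt.Println("copyHostCerts: refresh ca.pem/cert.pem/key.pem") }

func generateServerCert() {
	fmt.Println("generating server cert, san=[192.168.76.2 127.0.0.1 localhost minikube ...]")
}

// copyRemoteCerts is where every pass in this log dies: the SSH dial
// to 127.0.0.1:33939 never completes its handshake.
func copyRemoteCerts() error {
	return errors.New("ssh: handshake failed: read: connection reset by peer")
}

func configureAuth() error {
	copyHostCerts()
	generateServerCert()
	if err := copyRemoteCerts(); err != nil {
		return fmt.Errorf("NewSession: new client: %w", err)
	}
	return nil
}

func main() {
	if err := configureAuth(); err != nil {
		fmt.Println("configureAuth failed:", err) // then retried after a few hundred µs
	}
}
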
	I1109 00:37:08.658117  920123 provision.go:83] configureAuth start
	I1109 00:37:08.658209  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:37:08.677397  920123 provision.go:138] copyHostCerts
	I1109 00:37:08.677539  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:37:08.677551  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:37:08.677612  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:37:08.677716  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:37:08.677722  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:37:08.677743  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:37:08.677800  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:37:08.677806  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:37:08.677825  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:37:08.677875  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:37:09.115050  920123 provision.go:172] copyRemoteCerts
	I1109 00:37:09.115174  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:37:09.115266  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:37:09.143977  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:37:09.144911  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60830->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:09.144939  920123 retry.go:31] will retry after 146.979981ms: ssh: handshake failed: read tcp 127.0.0.1:60830->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:09.292585  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:09.292609  920123 retry.go:31] will retry after 309.623639ms: ssh: handshake failed: EOF
	W1109 00:37:09.603495  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:09.603523  920123 retry.go:31] will retry after 492.233711ms: ssh: handshake failed: EOF
	W1109 00:37:10.096774  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60852->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:10.096814  920123 retry.go:31] will retry after 828.32671ms: ssh: handshake failed: read tcp 127.0.0.1:60852->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:10.925951  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:10.926023  920123 retry.go:31] will retry after 184.01324ms: new client: new client: ssh: handshake failed: EOF
	I1109 00:37:11.110388  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:37:11.129389  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:37:11.130316  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:11.130338  920123 retry.go:31] will retry after 296.292504ms: ssh: handshake failed: EOF
	W1109 00:37:11.427264  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:11.427295  920123 retry.go:31] will retry after 315.025877ms: ssh: handshake failed: EOF
	W1109 00:37:11.742979  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60886->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:11.743006  920123 retry.go:31] will retry after 668.450393ms: ssh: handshake failed: read tcp 127.0.0.1:60886->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:12.412251  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:12.412330  920123 provision.go:86] duration metric: configureAuth took 3.754195027s
	W1109 00:37:12.412343  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:37:12.412353  920123 retry.go:31] will retry after 203.915µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: EOF
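The port used for every dial comes from the docker container inspect template that precedes each "new ssh client" line. The same Go template can be exercised directly against a hand-rolled stand-in for Docker's inspect output; only the template string is taken from the log, the data structure here is a toy:

package main

import (
	"os"
	"text/template"
)

// portBinding mirrors just enough of Docker's inspect JSON for the demo.
type portBinding struct{ HostIP, HostPort string }

func main() {
	// Hand-rolled stand-in for `docker container inspect` output.
	inspect := map[string]any{
		"NetworkSettings": map[string]any{
			"Ports": map[string][]portBinding{
				"22/tcp": {{HostIP: "127.0.0.1", HostPort: "33939"}},
			},
		},
	}
	// The template string is the one from the log, minus the shell quoting.
	tmpl := template.Must(template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	_ = tmpl.Execute(os.Stdout, inspect) // prints: 33939
}
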
	I1109 00:37:12.413498  920123 provision.go:83] configureAuth start
	I1109 00:37:12.413593  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:37:12.443080  920123 provision.go:138] copyHostCerts
	I1109 00:37:12.443150  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:37:12.443163  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:37:12.443220  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:37:12.443316  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:37:12.443323  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:37:12.443346  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:37:12.443474  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:37:12.443486  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:37:12.443511  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:37:12.443609  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:37:13.451710  920123 provision.go:172] copyRemoteCerts
	I1109 00:37:13.451821  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:37:13.451905  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:37:13.470475  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:37:13.471295  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44358->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:13.471313  920123 retry.go:31] will retry after 226.994465ms: ssh: handshake failed: read tcp 127.0.0.1:44358->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:13.699070  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44368->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:13.699096  920123 retry.go:31] will retry after 538.227939ms: ssh: handshake failed: read tcp 127.0.0.1:44368->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:14.237935  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44374->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:14.237962  920123 retry.go:31] will retry after 502.677491ms: ssh: handshake failed: read tcp 127.0.0.1:44374->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:14.741936  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44390->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:14.741972  920123 retry.go:31] will retry after 568.3332ms: ssh: handshake failed: read tcp 127.0.0.1:44390->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:15.311456  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44394->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:15.311547  920123 provision.go:86] duration metric: configureAuth took 2.898026552s
	W1109 00:37:15.311569  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:44394->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:15.311582  920123 retry.go:31] will retry after 438.798µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:44394->127.0.0.1:33939: read: connection reset by peer
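sshutil.go:53 shows the client parameters for each attempt: key auth as user docker against the mapped port. A rough equivalent using golang.org/x/crypto/ssh directly (an external module; the key path and the insecure host-key callback are illustrative and only sensible against a throwaway local test machine):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPEM, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/no-preload-881977/id_rsa"))
	if err != nil {
		fmt.Println(err)
		return
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		fmt.Println(err)
		return
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test VM only
	}
	// This is the dial that keeps failing above with EOF /
	// "connection reset by peer" before the handshake completes.
	client, err := ssh.Dial("tcp", "127.0.0.1:33939", cfg)
	if err != nil {
		fmt.Println("ssh: handshake failed:", err)
		return
	}
	defer client.Close()
}
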
	I1109 00:37:15.312687  920123 provision.go:83] configureAuth start
	I1109 00:37:15.312777  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:37:15.339755  920123 provision.go:138] copyHostCerts
	I1109 00:37:15.339826  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:37:15.339835  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:37:15.339898  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:37:15.339990  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:37:15.340000  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:37:15.340024  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:37:15.340082  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:37:15.340089  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:37:15.340109  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:37:15.340171  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:37:15.718329  920123 provision.go:172] copyRemoteCerts
	I1109 00:37:15.718401  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:37:15.718447  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:37:15.742037  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:37:15.742849  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44406->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:15.742875  920123 retry.go:31] will retry after 161.912212ms: ssh: handshake failed: read tcp 127.0.0.1:44406->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:15.905605  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44410->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:15.905669  920123 retry.go:31] will retry after 266.899558ms: ssh: handshake failed: read tcp 127.0.0.1:44410->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:16.173411  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44422->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:16.173477  920123 retry.go:31] will retry after 425.86632ms: ssh: handshake failed: read tcp 127.0.0.1:44422->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:16.600235  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44438->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:16.600262  920123 retry.go:31] will retry after 848.785584ms: ssh: handshake failed: read tcp 127.0.0.1:44438->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:17.449637  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44450->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:17.449707  920123 retry.go:31] will retry after 135.581291ms: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:44450->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:17.585964  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:37:17.615231  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:37:17.616102  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44462->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:17.616125  920123 retry.go:31] will retry after 135.325571ms: ssh: handshake failed: read tcp 127.0.0.1:44462->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:17.752834  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44476->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:17.752859  920123 retry.go:31] will retry after 370.105239ms: ssh: handshake failed: read tcp 127.0.0.1:44476->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:18.124435  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44488->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:18.124465  920123 retry.go:31] will retry after 620.018983ms: ssh: handshake failed: read tcp 127.0.0.1:44488->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:18.745114  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44496->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:18.745146  920123 retry.go:31] will retry after 426.914532ms: ssh: handshake failed: read tcp 127.0.0.1:44496->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:19.172979  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44510->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:19.173069  920123 provision.go:86] duration metric: configureAuth took 3.860366067s
	W1109 00:37:19.173082  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:44510->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:19.173094  920123 retry.go:31] will retry after 301.126µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:44510->127.0.0.1:33939: read: connection reset by peer
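The san=[...] list in each "generating server cert" line maps onto IP and DNS subject-alternative names. A self-contained toy of that step using crypto/x509, creating its own throwaway CA instead of reading ca.pem/ca-key.pem from .minikube:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA (the real flow signs with the .minikube CA key).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert carrying the SAN set from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-881977"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "no-preload-881977"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		fmt.Println("generate server cert:", err)
		return
	}
	fmt.Printf("server cert: %d DER bytes\n", len(der))
}
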
	I1109 00:37:19.174162  920123 provision.go:83] configureAuth start
	I1109 00:37:19.174248  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:37:19.196495  920123 provision.go:138] copyHostCerts
	I1109 00:37:19.196564  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:37:19.196581  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:37:19.196662  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:37:19.196757  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:37:19.196762  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:37:19.196789  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:37:19.196845  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:37:19.196850  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:37:19.196873  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:37:19.196923  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:37:19.643127  920123 provision.go:172] copyRemoteCerts
	I1109 00:37:19.643202  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:37:19.643265  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:37:19.663650  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:37:19.664514  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:19.664539  920123 retry.go:31] will retry after 332.134484ms: ssh: handshake failed: EOF
	W1109 00:37:20.003123  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:20.003201  920123 retry.go:31] will retry after 280.263679ms: ssh: handshake failed: EOF
	W1109 00:37:20.284246  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44540->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:20.284277  920123 retry.go:31] will retry after 301.72456ms: ssh: handshake failed: read tcp 127.0.0.1:44540->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:20.587161  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44546->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:20.587199  920123 retry.go:31] will retry after 869.46428ms: ssh: handshake failed: read tcp 127.0.0.1:44546->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:21.457708  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44556->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:21.457808  920123 provision.go:86] duration metric: configureAuth took 2.283628247s
	W1109 00:37:21.457822  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:44556->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:21.457835  920123 retry.go:31] will retry after 1.112555ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:44556->127.0.0.1:33939: read: connection reset by peer
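copyHostCerts itself is a remove-then-copy refresh of the three PEMs under .minikube, as the found/rm/cp triplets show. A toy version under the same assumption (paths are examples, not the test host's layout):

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// refresh removes any stale copy at dst, then copies src over it,
// mirroring the "found ..., removing ..." / "rm:" / "cp:" lines above.
func refresh(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		os.Remove(dst)
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	n, err := io.Copy(out, in)
	fmt.Printf("cp: %s --> %s (%d bytes)\n", src, dst, n)
	return err
}

func main() {
	home, _ := os.UserHomeDir()
	for _, pem := range []string{"ca.pem", "cert.pem", "key.pem"} {
		src := filepath.Join(home, ".minikube", "certs", pem)
		dst := filepath.Join(home, ".minikube", pem)
		if err := refresh(src, dst); err != nil {
			fmt.Println("skip:", err)
		}
	}
}
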
	I1109 00:37:21.459995  920123 provision.go:83] configureAuth start
	I1109 00:37:21.460075  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:37:21.478978  920123 provision.go:138] copyHostCerts
	I1109 00:37:21.479043  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:37:21.479054  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:37:21.479129  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:37:21.479234  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:37:21.479240  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:37:21.479265  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:37:21.479327  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:37:21.479332  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:37:21.479355  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:37:21.479420  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:37:22.282154  920123 provision.go:172] copyRemoteCerts
	I1109 00:37:22.282237  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:37:22.282279  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:37:22.305125  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:37:22.306010  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:22.306030  920123 retry.go:31] will retry after 160.585491ms: ssh: handshake failed: EOF
	W1109 00:37:22.467831  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:22.467860  920123 retry.go:31] will retry after 378.48026ms: ssh: handshake failed: EOF
	W1109 00:37:22.847749  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:22.847774  920123 retry.go:31] will retry after 639.205412ms: ssh: handshake failed: EOF
	W1109 00:37:23.487842  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60592->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:23.487912  920123 retry.go:31] will retry after 189.43241ms: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:60592->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:23.678366  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:37:23.696420  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:37:23.697288  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60602->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:23.697307  920123 retry.go:31] will retry after 229.846524ms: ssh: handshake failed: read tcp 127.0.0.1:60602->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:23.928099  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:23.928129  920123 retry.go:31] will retry after 535.644014ms: ssh: handshake failed: EOF
	W1109 00:37:24.465282  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60626->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:24.465312  920123 retry.go:31] will retry after 565.934901ms: ssh: handshake failed: read tcp 127.0.0.1:60626->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:25.032843  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60632->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:25.032939  920123 provision.go:86] duration metric: configureAuth took 3.572929225s
	W1109 00:37:25.032952  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:60632->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:25.032965  920123 retry.go:31] will retry after 1.280158ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:60632->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:25.035172  920123 provision.go:83] configureAuth start
	I1109 00:37:25.035292  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:37:25.054772  920123 provision.go:138] copyHostCerts
	I1109 00:37:25.054852  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:37:25.054863  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:37:25.054930  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:37:25.055045  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:37:25.055064  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:37:25.055091  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:37:25.055162  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:37:25.055171  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:37:25.055192  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:37:25.055249  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:37:25.421085  920123 provision.go:172] copyRemoteCerts
	I1109 00:37:25.421172  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:37:25.421221  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:37:25.440147  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:37:25.441154  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:25.441173  920123 retry.go:31] will retry after 207.750323ms: ssh: handshake failed: EOF
	W1109 00:37:25.650073  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:25.650100  920123 retry.go:31] will retry after 505.677079ms: ssh: handshake failed: EOF
	W1109 00:37:26.157300  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60670->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:26.157329  920123 retry.go:31] will retry after 754.051709ms: ssh: handshake failed: read tcp 127.0.0.1:60670->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:26.912612  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60684->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:26.912679  920123 retry.go:31] will retry after 173.456302ms: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:60684->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:27.087106  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:37:27.118577  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:37:27.119518  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:27.119538  920123 retry.go:31] will retry after 131.007859ms: ssh: handshake failed: EOF
	W1109 00:37:27.251210  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:27.251241  920123 retry.go:31] will retry after 420.575945ms: ssh: handshake failed: EOF
	W1109 00:37:27.673401  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:27.673482  920123 retry.go:31] will retry after 406.290272ms: ssh: handshake failed: EOF
	W1109 00:37:28.080388  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60730->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:28.080422  920123 retry.go:31] will retry after 571.891842ms: ssh: handshake failed: read tcp 127.0.0.1:60730->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:28.653675  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60744->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:28.653755  920123 provision.go:86] duration metric: configureAuth took 3.618555896s
	W1109 00:37:28.653768  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:60744->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:28.653780  920123 retry.go:31] will retry after 1.281ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:60744->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:28.655952  920123 provision.go:83] configureAuth start
	I1109 00:37:28.656050  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:37:28.685116  920123 provision.go:138] copyHostCerts
	I1109 00:37:28.685175  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:37:28.685184  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:37:28.685246  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:37:28.685339  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:37:28.685345  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:37:28.685366  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:37:28.685429  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:37:28.685507  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:37:28.685537  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:37:28.685609  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:37:29.278752  920123 provision.go:172] copyRemoteCerts
	I1109 00:37:29.278832  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:37:29.278878  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:37:29.303975  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:37:29.304800  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:29.304819  920123 retry.go:31] will retry after 129.496684ms: ssh: handshake failed: EOF
	W1109 00:37:29.435433  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:29.435467  920123 retry.go:31] will retry after 439.804855ms: ssh: handshake failed: EOF
	W1109 00:37:29.878470  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60768->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:29.878495  920123 retry.go:31] will retry after 800.535864ms: ssh: handshake failed: read tcp 127.0.0.1:60768->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:30.679614  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60776->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:30.679687  920123 retry.go:31] will retry after 188.246585ms: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:60776->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:30.869085  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:37:30.898525  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:37:30.899437  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:30.899454  920123 retry.go:31] will retry after 364.717899ms: ssh: handshake failed: EOF
	W1109 00:37:31.265606  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60794->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:31.265651  920123 retry.go:31] will retry after 496.97673ms: ssh: handshake failed: read tcp 127.0.0.1:60794->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:31.763825  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60806->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:31.763857  920123 retry.go:31] will retry after 308.678081ms: ssh: handshake failed: read tcp 127.0.0.1:60806->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:32.074008  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:49902->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:32.074049  920123 retry.go:31] will retry after 735.220048ms: ssh: handshake failed: read tcp 127.0.0.1:49902->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:32.810116  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:49906->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:32.810221  920123 provision.go:86] duration metric: configureAuth took 4.154248506s
	W1109 00:37:32.810233  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:49906->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:32.810248  920123 retry.go:31] will retry after 3.396745ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:49906->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:32.814372  920123 provision.go:83] configureAuth start
	I1109 00:37:32.814476  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:37:32.844193  920123 provision.go:138] copyHostCerts
	I1109 00:37:32.844255  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:37:32.844264  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:37:32.844325  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:37:32.844439  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:37:32.844445  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:37:32.844596  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:37:32.844768  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:37:32.844778  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:37:32.844863  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:37:32.845101  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:37:33.276195  920123 provision.go:172] copyRemoteCerts
	I1109 00:37:33.276268  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:37:33.276310  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:37:33.298893  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:37:33.299781  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:33.299810  920123 retry.go:31] will retry after 253.580465ms: ssh: handshake failed: EOF
	W1109 00:37:33.554694  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:33.554722  920123 retry.go:31] will retry after 382.210852ms: ssh: handshake failed: EOF
	W1109 00:37:33.937680  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:33.937708  920123 retry.go:31] will retry after 722.55606ms: ssh: handshake failed: EOF
	W1109 00:37:34.661809  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:49956->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:34.661840  920123 retry.go:31] will retry after 621.373706ms: ssh: handshake failed: read tcp 127.0.0.1:49956->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:35.284031  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:49970->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:35.284115  920123 provision.go:86] duration metric: configureAuth took 2.469718449s
	W1109 00:37:35.284123  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:49970->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:35.284133  920123 retry.go:31] will retry after 3.011711ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:49970->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:35.287246  920123 provision.go:83] configureAuth start
	I1109 00:37:35.287338  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:37:35.310957  920123 provision.go:138] copyHostCerts
	I1109 00:37:35.311020  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:37:35.311085  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:37:35.311182  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:37:35.311402  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:37:35.311518  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:37:35.313549  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:37:35.313680  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:37:35.313688  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:37:35.313719  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:37:35.313777  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:37:35.550429  920123 provision.go:172] copyRemoteCerts
	I1109 00:37:35.550503  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:37:35.550549  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:37:35.580474  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:37:35.581377  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:35.581398  920123 retry.go:31] will retry after 164.932764ms: ssh: handshake failed: EOF
	W1109 00:37:35.747165  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:35.747194  920123 retry.go:31] will retry after 236.010993ms: ssh: handshake failed: EOF
	W1109 00:37:35.984016  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:35.984040  920123 retry.go:31] will retry after 741.853572ms: ssh: handshake failed: EOF
	W1109 00:37:36.727316  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50022->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:36.727393  920123 retry.go:31] will retry after 310.111007ms: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:50022->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:37.037889  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:37:37.074124  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:37:37.084512  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:37.084686  920123 retry.go:31] will retry after 224.566206ms: ssh: handshake failed: EOF
	W1109 00:37:37.310590  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:37.310619  920123 retry.go:31] will retry after 224.774717ms: ssh: handshake failed: EOF
	W1109 00:37:37.536883  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50050->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:37.536908  920123 retry.go:31] will retry after 463.519146ms: ssh: handshake failed: read tcp 127.0.0.1:50050->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:38.001327  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50054->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:38.001374  920123 retry.go:31] will retry after 916.236608ms: ssh: handshake failed: read tcp 127.0.0.1:50054->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:38.918823  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:38.918907  920123 provision.go:86] duration metric: configureAuth took 3.631640608s
	W1109 00:37:38.918918  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:37:38.918929  920123 retry.go:31] will retry after 2.997172ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:37:38.922097  920123 provision.go:83] configureAuth start
	I1109 00:37:38.922181  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:37:38.940508  920123 provision.go:138] copyHostCerts
	I1109 00:37:38.940582  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:37:38.940594  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:37:38.940660  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:37:38.940760  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:37:38.940772  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:37:38.940802  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:37:38.940860  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:37:38.940873  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:37:38.940899  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:37:38.940948  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:37:39.203042  920123 provision.go:172] copyRemoteCerts
	I1109 00:37:39.203162  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:37:39.203255  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:37:39.223708  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:37:39.224662  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:39.224688  920123 retry.go:31] will retry after 190.282553ms: ssh: handshake failed: EOF
	W1109 00:37:39.416443  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:39.416474  920123 retry.go:31] will retry after 459.805372ms: ssh: handshake failed: EOF
	W1109 00:37:39.877429  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50090->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:39.877481  920123 retry.go:31] will retry after 677.150944ms: ssh: handshake failed: read tcp 127.0.0.1:50090->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:40.556076  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50106->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:40.556149  920123 retry.go:31] will retry after 199.764794ms: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:50106->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:40.756378  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:37:40.808422  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:37:40.809257  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:40.809283  920123 retry.go:31] will retry after 358.074824ms: ssh: handshake failed: EOF
	W1109 00:37:41.167977  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:41.168015  920123 retry.go:31] will retry after 217.457476ms: ssh: handshake failed: EOF
	W1109 00:37:41.386809  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:41.386838  920123 retry.go:31] will retry after 500.483098ms: ssh: handshake failed: EOF
	W1109 00:37:41.887848  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:41.887878  920123 retry.go:31] will retry after 572.07432ms: ssh: handshake failed: EOF
	W1109 00:37:42.460546  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:42.460634  920123 provision.go:86] duration metric: configureAuth took 3.53852105s
	W1109 00:37:42.460648  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:37:42.460659  920123 retry.go:31] will retry after 6.224811ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:37:42.467799  920123 provision.go:83] configureAuth start
	I1109 00:37:42.467900  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:37:42.503074  920123 provision.go:138] copyHostCerts
	I1109 00:37:42.503136  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:37:42.503144  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:37:42.503221  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:37:42.503310  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:37:42.503315  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:37:42.503340  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:37:42.503393  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:37:42.503398  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:37:42.503420  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:37:42.503472  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:37:42.882252  920123 provision.go:172] copyRemoteCerts
	I1109 00:37:42.882370  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:37:42.882435  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:37:42.903090  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:37:42.903968  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:42.903986  920123 retry.go:31] will retry after 363.952088ms: ssh: handshake failed: EOF
	W1109 00:37:43.269296  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:57468->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:43.269323  920123 retry.go:31] will retry after 193.051794ms: ssh: handshake failed: read tcp 127.0.0.1:57468->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:43.462974  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:57476->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:43.462998  920123 retry.go:31] will retry after 354.850808ms: ssh: handshake failed: read tcp 127.0.0.1:57476->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:43.819125  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:57484->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:43.819158  920123 retry.go:31] will retry after 814.973519ms: ssh: handshake failed: read tcp 127.0.0.1:57484->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:44.635580  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:44.635659  920123 provision.go:86] duration metric: configureAuth took 2.167837128s
	W1109 00:37:44.635671  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:37:44.635682  920123 retry.go:31] will retry after 10.981965ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:37:44.646859  920123 provision.go:83] configureAuth start
	I1109 00:37:44.646951  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:37:44.680498  920123 provision.go:138] copyHostCerts
	I1109 00:37:44.680561  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:37:44.680570  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:37:44.680633  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:37:44.680728  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:37:44.680733  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:37:44.680754  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:37:44.680805  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:37:44.680810  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:37:44.680829  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:37:44.680869  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:37:44.931506  920123 provision.go:172] copyRemoteCerts
	I1109 00:37:44.931617  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:37:44.931691  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:37:44.952599  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:37:44.953558  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:44.953577  920123 retry.go:31] will retry after 221.209398ms: ssh: handshake failed: EOF
	W1109 00:37:45.175906  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:45.175938  920123 retry.go:31] will retry after 438.469568ms: ssh: handshake failed: EOF
	W1109 00:37:45.614995  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:45.615019  920123 retry.go:31] will retry after 790.703763ms: ssh: handshake failed: EOF
	W1109 00:37:46.406311  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:57526->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:46.406353  920123 retry.go:31] will retry after 425.129319ms: ssh: handshake failed: read tcp 127.0.0.1:57526->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:46.832081  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:46.832158  920123 provision.go:86] duration metric: configureAuth took 2.185280045s
	W1109 00:37:46.832164  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:37:46.832173  920123 retry.go:31] will retry after 15.554751ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:37:46.848344  920123 provision.go:83] configureAuth start
	I1109 00:37:46.848442  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:37:46.870913  920123 provision.go:138] copyHostCerts
	I1109 00:37:46.870978  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:37:46.870986  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:37:46.871049  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:37:46.871165  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:37:46.871170  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:37:46.871190  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:37:46.871239  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:37:46.871244  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:37:46.871262  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:37:46.871301  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:37:47.834054  920123 provision.go:172] copyRemoteCerts
	I1109 00:37:47.834178  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:37:47.834258  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:37:47.856766  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:37:47.857690  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:47.857707  920123 retry.go:31] will retry after 136.88442ms: ssh: handshake failed: EOF
	W1109 00:37:47.995404  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:57558->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:47.995442  920123 retry.go:31] will retry after 270.455859ms: ssh: handshake failed: read tcp 127.0.0.1:57558->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:48.267332  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:57572->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:48.267358  920123 retry.go:31] will retry after 786.823774ms: ssh: handshake failed: read tcp 127.0.0.1:57572->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:49.054739  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:57586->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:49.054765  920123 retry.go:31] will retry after 521.498357ms: ssh: handshake failed: read tcp 127.0.0.1:57586->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:49.576836  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:49.576915  920123 provision.go:86] duration metric: configureAuth took 2.728547817s
	W1109 00:37:49.576931  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:37:49.576941  920123 retry.go:31] will retry after 19.435818ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:37:49.597128  920123 provision.go:83] configureAuth start
	I1109 00:37:49.597225  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:37:49.618085  920123 provision.go:138] copyHostCerts
	I1109 00:37:49.618144  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:37:49.618153  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:37:49.618211  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:37:49.618302  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:37:49.618308  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:37:49.618327  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:37:49.618374  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:37:49.618379  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:37:49.618396  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:37:49.618436  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:37:49.935854  920123 provision.go:172] copyRemoteCerts
	I1109 00:37:49.935924  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:37:49.935973  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:37:49.960333  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:37:49.961200  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:49.961221  920123 retry.go:31] will retry after 221.67573ms: ssh: handshake failed: EOF
	W1109 00:37:50.184172  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:50.184201  920123 retry.go:31] will retry after 279.15097ms: ssh: handshake failed: EOF
	W1109 00:37:50.463951  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:57630->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:50.463983  920123 retry.go:31] will retry after 410.388185ms: ssh: handshake failed: read tcp 127.0.0.1:57630->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:50.874990  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:50.875020  920123 retry.go:31] will retry after 702.290278ms: ssh: handshake failed: EOF
	W1109 00:37:51.577976  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:57650->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:51.578052  920123 retry.go:31] will retry after 267.624065ms: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:57650->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:51.846527  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:37:51.867300  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:37:51.868166  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:51.868188  920123 retry.go:31] will retry after 334.386495ms: ssh: handshake failed: EOF
	W1109 00:37:52.203142  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:40818->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:52.203173  920123 retry.go:31] will retry after 519.39714ms: ssh: handshake failed: read tcp 127.0.0.1:40818->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:52.723216  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:40828->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:52.723247  920123 retry.go:31] will retry after 544.058418ms: ssh: handshake failed: read tcp 127.0.0.1:40828->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:53.268010  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:40838->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:53.268102  920123 provision.go:86] duration metric: configureAuth took 3.670952592s
	W1109 00:37:53.268110  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:40838->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:53.268119  920123 retry.go:31] will retry after 28.399803ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:40838->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:53.297323  920123 provision.go:83] configureAuth start
	I1109 00:37:53.297426  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:37:53.320170  920123 provision.go:138] copyHostCerts
	I1109 00:37:53.320234  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:37:53.320244  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:37:53.320305  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:37:53.320393  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:37:53.320398  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:37:53.320419  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:37:53.320467  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:37:53.320472  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:37:53.320489  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:37:53.320530  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:37:54.054797  920123 provision.go:172] copyRemoteCerts
	I1109 00:37:54.054916  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:37:54.054996  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:37:54.077191  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:37:54.078089  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:54.078115  920123 retry.go:31] will retry after 236.076035ms: ssh: handshake failed: EOF
	W1109 00:37:54.314779  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:54.314807  920123 retry.go:31] will retry after 478.965051ms: ssh: handshake failed: EOF
	W1109 00:37:54.795037  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:54.795079  920123 retry.go:31] will retry after 624.971302ms: ssh: handshake failed: EOF
	W1109 00:37:55.421169  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:40868->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:55.421242  920123 retry.go:31] will retry after 272.828007ms: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:40868->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:55.694727  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:37:55.727088  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:37:55.728232  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:40870->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:55.728259  920123 retry.go:31] will retry after 257.847974ms: ssh: handshake failed: read tcp 127.0.0.1:40870->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:55.987067  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:40886->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:55.987102  920123 retry.go:31] will retry after 466.393097ms: ssh: handshake failed: read tcp 127.0.0.1:40886->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:56.454966  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:40892->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:56.454991  920123 retry.go:31] will retry after 604.933247ms: ssh: handshake failed: read tcp 127.0.0.1:40892->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:57.061223  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:57.061305  920123 provision.go:86] duration metric: configureAuth took 3.763959182s
	W1109 00:37:57.061318  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:37:57.061329  920123 retry.go:31] will retry after 35.475573ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:37:57.097478  920123 provision.go:83] configureAuth start
	I1109 00:37:57.097578  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:37:57.128535  920123 provision.go:138] copyHostCerts
	I1109 00:37:57.128608  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:37:57.128623  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:37:57.128692  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:37:57.128795  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:37:57.128807  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:37:57.128830  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:37:57.128888  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:37:57.128897  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:37:57.128917  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:37:57.128966  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:37:57.772361  920123 provision.go:172] copyRemoteCerts
	I1109 00:37:57.772439  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:37:57.772498  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:37:57.813531  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:37:57.814396  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:57.814426  920123 retry.go:31] will retry after 181.744823ms: ssh: handshake failed: EOF
	W1109 00:37:57.997219  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:40930->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:57.997248  920123 retry.go:31] will retry after 222.891413ms: ssh: handshake failed: read tcp 127.0.0.1:40930->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:58.220950  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:37:58.220979  920123 retry.go:31] will retry after 374.669577ms: ssh: handshake failed: EOF
	W1109 00:37:58.596907  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:40948->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:58.596933  920123 retry.go:31] will retry after 1.148383575s: ssh: handshake failed: read tcp 127.0.0.1:40948->127.0.0.1:33939: read: connection reset by peer
	W1109 00:37:59.745944  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:40952->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:59.746027  920123 provision.go:86] duration metric: configureAuth took 2.648520355s
	W1109 00:37:59.746040  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:40952->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:59.746052  920123 retry.go:31] will retry after 104.634186ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:40952->127.0.0.1:33939: read: connection reset by peer
	I1109 00:37:59.851394  920123 provision.go:83] configureAuth start
	I1109 00:37:59.851497  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:37:59.887463  920123 provision.go:138] copyHostCerts
	I1109 00:37:59.887543  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:37:59.887553  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:37:59.887618  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:37:59.887721  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:37:59.887727  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:37:59.887750  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:37:59.887803  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:37:59.887808  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:37:59.887827  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:37:59.887867  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:38:01.008488  920123 provision.go:172] copyRemoteCerts
	I1109 00:38:01.008576  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:38:01.008632  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:38:01.045558  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:38:01.046411  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:40954->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:01.046443  920123 retry.go:31] will retry after 127.7595ms: ssh: handshake failed: read tcp 127.0.0.1:40954->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:01.175145  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:40958->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:01.175174  920123 retry.go:31] will retry after 402.049223ms: ssh: handshake failed: read tcp 127.0.0.1:40958->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:01.578078  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:40974->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:01.578108  920123 retry.go:31] will retry after 793.69003ms: ssh: handshake failed: read tcp 127.0.0.1:40974->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:02.372989  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44570->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:02.373063  920123 retry.go:31] will retry after 337.419662ms: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:44570->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:02.711650  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:38:02.730463  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:38:02.731369  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44574->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:02.731396  920123 retry.go:31] will retry after 237.803889ms: ssh: handshake failed: read tcp 127.0.0.1:44574->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:02.970299  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44584->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:02.970332  920123 retry.go:31] will retry after 513.193691ms: ssh: handshake failed: read tcp 127.0.0.1:44584->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:03.484951  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:03.484980  920123 retry.go:31] will retry after 674.182631ms: ssh: handshake failed: EOF
	W1109 00:38:04.159819  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44606->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:04.159898  920123 provision.go:86] duration metric: configureAuth took 4.308475745s
	W1109 00:38:04.159906  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:44606->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:04.159916  920123 retry.go:31] will retry after 184.660335ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:44606->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:04.345155  920123 provision.go:83] configureAuth start
	I1109 00:38:04.345249  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:38:04.379410  920123 provision.go:138] copyHostCerts
	I1109 00:38:04.379486  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:38:04.379496  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:38:04.379563  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:38:04.379658  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:38:04.379664  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:38:04.379685  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:38:04.379737  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:38:04.379746  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:38:04.379765  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:38:04.379823  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:38:05.183853  920123 provision.go:172] copyRemoteCerts
	I1109 00:38:05.183935  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:38:05.183983  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:38:05.213422  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:38:05.214335  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:05.214357  920123 retry.go:31] will retry after 303.234534ms: ssh: handshake failed: EOF
	W1109 00:38:05.519025  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44628->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:05.519061  920123 retry.go:31] will retry after 539.890752ms: ssh: handshake failed: read tcp 127.0.0.1:44628->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:06.059738  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:06.059771  920123 retry.go:31] will retry after 773.394632ms: ssh: handshake failed: EOF
	W1109 00:38:06.833804  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44636->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:06.833876  920123 retry.go:31] will retry after 193.670808ms: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:44636->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:07.028319  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:38:07.046765  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:38:07.047651  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44644->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:07.047678  920123 retry.go:31] will retry after 163.796651ms: ssh: handshake failed: read tcp 127.0.0.1:44644->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:07.212473  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:07.212497  920123 retry.go:31] will retry after 205.030507ms: ssh: handshake failed: EOF
	W1109 00:38:07.418144  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44658->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:07.418179  920123 retry.go:31] will retry after 819.184885ms: ssh: handshake failed: read tcp 127.0.0.1:44658->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:08.238077  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44662->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:08.238116  920123 retry.go:31] will retry after 461.982356ms: ssh: handshake failed: read tcp 127.0.0.1:44662->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:08.701255  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:08.701335  920123 provision.go:86] duration metric: configureAuth took 4.35615278s
	W1109 00:38:08.701347  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:38:08.701357  920123 retry.go:31] will retry after 178.93408ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:38:08.880639  920123 provision.go:83] configureAuth start
	I1109 00:38:08.880726  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:38:08.905643  920123 provision.go:138] copyHostCerts
	I1109 00:38:08.905978  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:38:08.905999  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:38:08.906065  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:38:08.906217  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:38:08.913552  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:38:08.913650  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:38:08.913776  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:38:08.913789  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:38:08.913814  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:38:08.913868  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:38:09.271033  920123 provision.go:172] copyRemoteCerts
	I1109 00:38:09.271146  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:38:09.271206  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:38:09.290074  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:38:09.290936  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44682->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:09.290964  920123 retry.go:31] will retry after 196.884646ms: ssh: handshake failed: read tcp 127.0.0.1:44682->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:09.488856  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:09.488887  920123 retry.go:31] will retry after 273.444209ms: ssh: handshake failed: EOF
	W1109 00:38:09.763651  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44696->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:09.763682  920123 retry.go:31] will retry after 614.944112ms: ssh: handshake failed: read tcp 127.0.0.1:44696->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:10.380123  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44706->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:10.380162  920123 retry.go:31] will retry after 492.854907ms: ssh: handshake failed: read tcp 127.0.0.1:44706->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:10.874105  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44718->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:10.874182  920123 retry.go:31] will retry after 212.44462ms: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:44718->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:11.087648  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:38:11.107114  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:38:11.108020  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:11.108045  920123 retry.go:31] will retry after 338.198302ms: ssh: handshake failed: EOF
	W1109 00:38:11.446823  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44732->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:11.446849  920123 retry.go:31] will retry after 215.467808ms: ssh: handshake failed: read tcp 127.0.0.1:44732->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:11.663786  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:11.663811  920123 retry.go:31] will retry after 766.639036ms: ssh: handshake failed: EOF
	W1109 00:38:12.431153  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:12.431182  920123 retry.go:31] will retry after 499.03677ms: ssh: handshake failed: EOF
	W1109 00:38:12.930823  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:34006->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:12.930905  920123 provision.go:86] duration metric: configureAuth took 4.050244174s
	W1109 00:38:12.930917  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:34006->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:12.930928  920123 retry.go:31] will retry after 387.121734ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:34006->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:13.319120  920123 provision.go:83] configureAuth start
	I1109 00:38:13.319218  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:38:13.337686  920123 provision.go:138] copyHostCerts
	I1109 00:38:13.337759  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:38:13.337772  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:38:13.337839  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:38:13.337933  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:38:13.337942  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:38:13.337965  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:38:13.338021  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:38:13.338030  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:38:13.338049  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:38:13.338097  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:38:14.351130  920123 provision.go:172] copyRemoteCerts
	I1109 00:38:14.351200  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:38:14.351250  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:38:14.370933  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:38:14.371821  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:14.371842  920123 retry.go:31] will retry after 307.586436ms: ssh: handshake failed: EOF
	W1109 00:38:14.680729  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:34026->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:14.680757  920123 retry.go:31] will retry after 371.790273ms: ssh: handshake failed: read tcp 127.0.0.1:34026->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:15.054238  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:15.054265  920123 retry.go:31] will retry after 601.912941ms: ssh: handshake failed: EOF
	W1109 00:38:15.657606  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:15.657676  920123 retry.go:31] will retry after 341.41482ms: new client: new client: ssh: handshake failed: EOF
	I1109 00:38:16.000440  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:38:16.027556  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:38:16.028480  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:34056->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:16.028508  920123 retry.go:31] will retry after 137.05487ms: ssh: handshake failed: read tcp 127.0.0.1:34056->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:16.166255  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:34060->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:16.166284  920123 retry.go:31] will retry after 439.437818ms: ssh: handshake failed: read tcp 127.0.0.1:34060->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:16.607057  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:34070->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:16.607087  920123 retry.go:31] will retry after 820.221606ms: ssh: handshake failed: read tcp 127.0.0.1:34070->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:17.427894  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:17.427974  920123 provision.go:86] duration metric: configureAuth took 4.108826779s
	W1109 00:38:17.427981  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:38:17.427990  920123 retry.go:31] will retry after 431.833322ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:38:17.860548  920123 provision.go:83] configureAuth start
	I1109 00:38:17.860638  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:38:17.885157  920123 provision.go:138] copyHostCerts
	I1109 00:38:17.885225  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:38:17.885234  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:38:17.885290  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:38:17.885386  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:38:17.885392  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:38:17.885412  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:38:17.885573  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:38:17.885580  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:38:17.885603  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:38:17.885651  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:38:18.502482  920123 provision.go:172] copyRemoteCerts
	I1109 00:38:18.502595  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:38:18.502670  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:38:18.535549  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:38:18.536384  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:18.536405  920123 retry.go:31] will retry after 179.066369ms: ssh: handshake failed: EOF
	W1109 00:38:18.716025  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:18.716048  920123 retry.go:31] will retry after 275.140496ms: ssh: handshake failed: EOF
	W1109 00:38:18.991885  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:34112->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:18.991910  920123 retry.go:31] will retry after 612.619284ms: ssh: handshake failed: read tcp 127.0.0.1:34112->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:19.605115  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:34122->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:19.605189  920123 retry.go:31] will retry after 302.869823ms: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:34122->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:19.908976  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:38:19.928127  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:38:19.929131  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:19.929158  920123 retry.go:31] will retry after 364.231335ms: ssh: handshake failed: EOF
	W1109 00:38:20.294860  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:34138->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:20.294893  920123 retry.go:31] will retry after 301.767947ms: ssh: handshake failed: read tcp 127.0.0.1:34138->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:20.597862  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:34152->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:20.597893  920123 retry.go:31] will retry after 598.280642ms: ssh: handshake failed: read tcp 127.0.0.1:34152->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:21.197659  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:21.197688  920123 retry.go:31] will retry after 584.165304ms: ssh: handshake failed: EOF
	W1109 00:38:21.782838  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:21.782926  920123 provision.go:86] duration metric: configureAuth took 3.92235444s
	W1109 00:38:21.782939  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:38:21.782950  920123 retry.go:31] will retry after 1.047175254s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:38:22.830308  920123 provision.go:83] configureAuth start
	I1109 00:38:22.830408  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:38:22.848551  920123 provision.go:138] copyHostCerts
	I1109 00:38:22.848622  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:38:22.848635  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:38:22.848699  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:38:22.848791  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:38:22.848802  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:38:22.848825  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:38:22.848875  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:38:22.848884  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:38:22.848904  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:38:22.848946  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:38:23.221768  920123 provision.go:172] copyRemoteCerts
	I1109 00:38:23.221844  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:38:23.221886  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:38:23.241134  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:38:23.242049  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50226->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:23.242073  920123 retry.go:31] will retry after 178.45586ms: ssh: handshake failed: read tcp 127.0.0.1:50226->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:23.421921  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:23.421950  920123 retry.go:31] will retry after 258.067146ms: ssh: handshake failed: EOF
	W1109 00:38:23.680684  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50246->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:23.680714  920123 retry.go:31] will retry after 841.402181ms: ssh: handshake failed: read tcp 127.0.0.1:50246->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:24.522796  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50260->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:24.522870  920123 retry.go:31] will retry after 295.540697ms: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:50260->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:24.819577  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:38:24.839043  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:38:24.839932  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:24.839952  920123 retry.go:31] will retry after 268.081882ms: ssh: handshake failed: EOF
	W1109 00:38:25.108796  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:25.108827  920123 retry.go:31] will retry after 536.826532ms: ssh: handshake failed: EOF
	W1109 00:38:25.647041  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50288->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:25.647079  920123 retry.go:31] will retry after 787.887447ms: ssh: handshake failed: read tcp 127.0.0.1:50288->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:26.436474  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:26.436561  920123 provision.go:86] duration metric: configureAuth took 3.606220961s
	W1109 00:38:26.436575  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:38:26.436585  920123 retry.go:31] will retry after 900.788797ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:38:27.337539  920123 provision.go:83] configureAuth start
	I1109 00:38:27.337648  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:38:27.362444  920123 provision.go:138] copyHostCerts
	I1109 00:38:27.362541  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:38:27.362566  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:38:27.362664  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:38:27.362781  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:38:27.362795  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:38:27.362829  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:38:27.362905  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:38:27.362914  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:38:27.362948  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:38:27.363012  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:38:27.926048  920123 provision.go:172] copyRemoteCerts
	I1109 00:38:27.926134  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:38:27.926200  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:38:27.945360  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:38:27.946261  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50296->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:27.946289  920123 retry.go:31] will retry after 237.825628ms: ssh: handshake failed: read tcp 127.0.0.1:50296->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:28.185044  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50298->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:28.185076  920123 retry.go:31] will retry after 560.567931ms: ssh: handshake failed: read tcp 127.0.0.1:50298->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:28.746781  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:28.746810  920123 retry.go:31] will retry after 659.383692ms: ssh: handshake failed: EOF
	W1109 00:38:29.406885  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50322->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:29.406959  920123 retry.go:31] will retry after 241.464279ms: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:50322->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:29.649497  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:38:29.668393  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:38:29.669351  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:29.669382  920123 retry.go:31] will retry after 351.105511ms: ssh: handshake failed: EOF
	W1109 00:38:30.032117  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50332->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:30.032152  920123 retry.go:31] will retry after 219.721748ms: ssh: handshake failed: read tcp 127.0.0.1:50332->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:30.253172  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50340->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:30.253202  920123 retry.go:31] will retry after 721.952295ms: ssh: handshake failed: read tcp 127.0.0.1:50340->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:30.976212  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50352->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:30.976297  920123 provision.go:86] duration metric: configureAuth took 3.638730031s
	W1109 00:38:30.976310  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:50352->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:30.976320  920123 retry.go:31] will retry after 2.316104932s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:50352->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:33.293504  920123 provision.go:83] configureAuth start
	I1109 00:38:33.293596  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:38:33.321347  920123 provision.go:138] copyHostCerts
	I1109 00:38:33.321412  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:38:33.321422  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:38:33.321518  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:38:33.321608  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:38:33.321618  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:38:33.321640  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:38:33.321722  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:38:33.321727  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:38:33.321746  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:38:33.321788  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:38:34.347944  920123 provision.go:172] copyRemoteCerts
	I1109 00:38:34.348022  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:38:34.348066  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:38:34.369975  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:38:34.370857  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:35394->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:34.370875  920123 retry.go:31] will retry after 309.019629ms: ssh: handshake failed: read tcp 127.0.0.1:35394->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:34.680700  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:34.680730  920123 retry.go:31] will retry after 455.368462ms: ssh: handshake failed: EOF
	W1109 00:38:35.136646  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:35414->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:35.136670  920123 retry.go:31] will retry after 498.133714ms: ssh: handshake failed: read tcp 127.0.0.1:35414->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:35.635542  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:35430->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:35.635622  920123 retry.go:31] will retry after 146.129582ms: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:35430->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:35.781923  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:38:35.800509  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:38:35.801550  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:35.801574  920123 retry.go:31] will retry after 364.239402ms: ssh: handshake failed: EOF
	W1109 00:38:36.166581  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:36.166612  920123 retry.go:31] will retry after 377.466147ms: ssh: handshake failed: EOF
	W1109 00:38:36.545753  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:35452->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:36.545789  920123 retry.go:31] will retry after 515.808768ms: ssh: handshake failed: read tcp 127.0.0.1:35452->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:37.062877  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:37.062962  920123 provision.go:86] duration metric: configureAuth took 3.769435197s
	W1109 00:38:37.062974  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:38:37.062987  920123 retry.go:31] will retry after 3.240099524s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:38:40.303233  920123 provision.go:83] configureAuth start
	I1109 00:38:40.303331  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:38:40.321896  920123 provision.go:138] copyHostCerts
	I1109 00:38:40.321973  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:38:40.321987  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:38:40.322055  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:38:40.322181  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:38:40.322192  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:38:40.322216  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:38:40.322272  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:38:40.322279  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:38:40.322298  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:38:40.322344  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:38:41.545577  920123 provision.go:172] copyRemoteCerts
	I1109 00:38:41.545650  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:38:41.545691  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:38:41.564461  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:38:41.565420  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:41.565525  920123 retry.go:31] will retry after 177.976796ms: ssh: handshake failed: EOF
	W1109 00:38:41.744383  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:41.744424  920123 retry.go:31] will retry after 431.624459ms: ssh: handshake failed: EOF
	W1109 00:38:42.177533  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44392->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:42.177567  920123 retry.go:31] will retry after 509.268984ms: ssh: handshake failed: read tcp 127.0.0.1:44392->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:42.687754  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44404->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:42.687826  920123 retry.go:31] will retry after 338.247319ms: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:44404->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:43.026324  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:38:43.049712  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:38:43.050683  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44414->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:43.050710  920123 retry.go:31] will retry after 192.662307ms: ssh: handshake failed: read tcp 127.0.0.1:44414->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:43.244608  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:43.244636  920123 retry.go:31] will retry after 198.27833ms: ssh: handshake failed: EOF
	W1109 00:38:43.444593  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44446->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:43.444620  920123 retry.go:31] will retry after 310.249488ms: ssh: handshake failed: read tcp 127.0.0.1:44446->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:43.755555  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44462->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:43.755584  920123 retry.go:31] will retry after 503.072654ms: ssh: handshake failed: read tcp 127.0.0.1:44462->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:44.259673  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44468->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:44.259752  920123 provision.go:86] duration metric: configureAuth took 3.956490319s
	W1109 00:38:44.259766  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:44468->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:44.259777  920123 retry.go:31] will retry after 5.396856667s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:44468->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:49.661488  920123 provision.go:83] configureAuth start
	I1109 00:38:49.661594  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:38:49.679886  920123 provision.go:138] copyHostCerts
	I1109 00:38:49.679959  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:38:49.679973  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:38:49.680050  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:38:49.680147  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:38:49.680156  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:38:49.680184  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:38:49.680239  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:38:49.680247  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:38:49.680274  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:38:49.680321  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:38:49.873461  920123 provision.go:172] copyRemoteCerts
	I1109 00:38:49.873535  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:38:49.873578  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:38:49.897960  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:38:49.898810  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:49.898830  920123 retry.go:31] will retry after 193.195808ms: ssh: handshake failed: EOF
	W1109 00:38:50.092709  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44482->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:50.092741  920123 retry.go:31] will retry after 545.432455ms: ssh: handshake failed: read tcp 127.0.0.1:44482->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:50.638897  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:50.638935  920123 retry.go:31] will retry after 376.371646ms: ssh: handshake failed: EOF
	W1109 00:38:51.015922  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:51.015958  920123 retry.go:31] will retry after 765.124548ms: ssh: handshake failed: EOF
	W1109 00:38:51.781757  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:51.781836  920123 provision.go:86] duration metric: configureAuth took 2.12032014s
	W1109 00:38:51.781846  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:38:51.781858  920123 retry.go:31] will retry after 4.301528284s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:38:56.083586  920123 provision.go:83] configureAuth start
	I1109 00:38:56.083715  920123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-881977
	I1109 00:38:56.101995  920123 provision.go:138] copyHostCerts
	I1109 00:38:56.102067  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem, removing ...
	I1109 00:38:56.102080  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem
	I1109 00:38:56.102159  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/ca.pem (1078 bytes)
	I1109 00:38:56.102255  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem, removing ...
	I1109 00:38:56.102261  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem
	I1109 00:38:56.102287  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/cert.pem (1123 bytes)
	I1109 00:38:56.102375  920123 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem, removing ...
	I1109 00:38:56.102380  920123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem
	I1109 00:38:56.102402  920123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-749551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-749551/.minikube/key.pem (1679 bytes)
	I1109 00:38:56.102444  920123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-749551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-749551/.minikube/certs/ca-key.pem org=jenkins.no-preload-881977 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-881977]
	I1109 00:38:56.977773  920123 provision.go:172] copyRemoteCerts
	I1109 00:38:56.977849  920123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 00:38:56.977889  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:38:57.001109  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:38:57.002044  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:57.002068  920123 retry.go:31] will retry after 286.010204ms: ssh: handshake failed: EOF
	W1109 00:38:57.289089  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:39628->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:57.289118  920123 retry.go:31] will retry after 493.895193ms: ssh: handshake failed: read tcp 127.0.0.1:39628->127.0.0.1:33939: read: connection reset by peer
	W1109 00:38:57.784278  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:57.784308  920123 retry.go:31] will retry after 302.762588ms: ssh: handshake failed: EOF
	W1109 00:38:58.088299  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:58.088330  920123 retry.go:31] will retry after 744.102882ms: ssh: handshake failed: EOF
	W1109 00:38:58.833703  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:58.833775  920123 provision.go:86] duration metric: configureAuth took 2.750160833s
	W1109 00:38:58.833781  920123 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:38:58.833791  920123 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:38:58.833799  920123 machine.go:91] provisioned docker machine in 7m57.146556619s
	I1109 00:38:58.833865  920123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 00:38:58.833911  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:38:58.876231  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:38:58.877154  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:58.877171  920123 retry.go:31] will retry after 143.707598ms: ssh: handshake failed: EOF
	W1109 00:38:59.021877  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:38:59.021900  920123 retry.go:31] will retry after 538.395272ms: ssh: handshake failed: EOF
	W1109 00:38:59.560984  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:39680->127.0.0.1:33939: read: connection reset by peer
	I1109 00:38:59.561010  920123 retry.go:31] will retry after 698.635452ms: ssh: handshake failed: read tcp 127.0.0.1:39680->127.0.0.1:33939: read: connection reset by peer
	W1109 00:39:00.260610  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:39686->127.0.0.1:33939: read: connection reset by peer
	I1109 00:39:00.260682  920123 retry.go:31] will retry after 215.033079ms: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:39686->127.0.0.1:33939: read: connection reset by peer
	I1109 00:39:00.475997  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:39:00.498793  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:39:00.499845  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:39700->127.0.0.1:33939: read: connection reset by peer
	I1109 00:39:00.499871  920123 retry.go:31] will retry after 199.097386ms: ssh: handshake failed: read tcp 127.0.0.1:39700->127.0.0.1:33939: read: connection reset by peer
	W1109 00:39:00.699638  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:39708->127.0.0.1:33939: read: connection reset by peer
	I1109 00:39:00.699665  920123 retry.go:31] will retry after 295.539337ms: ssh: handshake failed: read tcp 127.0.0.1:39708->127.0.0.1:33939: read: connection reset by peer
	W1109 00:39:00.996626  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:39710->127.0.0.1:33939: read: connection reset by peer
	I1109 00:39:00.996656  920123 retry.go:31] will retry after 544.260878ms: ssh: handshake failed: read tcp 127.0.0.1:39710->127.0.0.1:33939: read: connection reset by peer
	W1109 00:39:01.541995  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:39722->127.0.0.1:33939: read: connection reset by peer
	I1109 00:39:01.542027  920123 retry.go:31] will retry after 521.897179ms: ssh: handshake failed: read tcp 127.0.0.1:39722->127.0.0.1:33939: read: connection reset by peer
	W1109 00:39:02.065359  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:40400->127.0.0.1:33939: read: connection reset by peer
	W1109 00:39:02.065457  920123 start.go:275] error running df -h /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:40400->127.0.0.1:33939: read: connection reset by peer
	W1109 00:39:02.065472  920123 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:40400->127.0.0.1:33939: read: connection reset by peer
	I1109 00:39:02.065532  920123 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 00:39:02.065587  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:39:02.085830  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:39:02.086814  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:40408->127.0.0.1:33939: read: connection reset by peer
	I1109 00:39:02.086841  920123 retry.go:31] will retry after 370.640248ms: ssh: handshake failed: read tcp 127.0.0.1:40408->127.0.0.1:33939: read: connection reset by peer
	W1109 00:39:02.458903  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:39:02.458933  920123 retry.go:31] will retry after 225.865628ms: ssh: handshake failed: EOF
	W1109 00:39:02.686032  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:40422->127.0.0.1:33939: read: connection reset by peer
	I1109 00:39:02.686062  920123 retry.go:31] will retry after 379.300385ms: ssh: handshake failed: read tcp 127.0.0.1:40422->127.0.0.1:33939: read: connection reset by peer
	W1109 00:39:03.066031  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:40426->127.0.0.1:33939: read: connection reset by peer
	I1109 00:39:03.066061  920123 retry.go:31] will retry after 761.646436ms: ssh: handshake failed: read tcp 127.0.0.1:40426->127.0.0.1:33939: read: connection reset by peer
	W1109 00:39:03.828433  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:40438->127.0.0.1:33939: read: connection reset by peer
	I1109 00:39:03.828496  920123 retry.go:31] will retry after 154.572717ms: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:40438->127.0.0.1:33939: read: connection reset by peer
	I1109 00:39:03.983911  920123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-881977
	I1109 00:39:04.005567  920123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33939 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/no-preload-881977/id_rsa Username:docker}
	W1109 00:39:04.006467  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 00:39:04.006490  920123 retry.go:31] will retry after 327.342873ms: ssh: handshake failed: EOF
	W1109 00:39:04.334391  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:40458->127.0.0.1:33939: read: connection reset by peer
	I1109 00:39:04.334425  920123 retry.go:31] will retry after 480.417274ms: ssh: handshake failed: read tcp 127.0.0.1:40458->127.0.0.1:33939: read: connection reset by peer
	W1109 00:39:04.815504  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:40464->127.0.0.1:33939: read: connection reset by peer
	I1109 00:39:04.815530  920123 retry.go:31] will retry after 822.303541ms: ssh: handshake failed: read tcp 127.0.0.1:40464->127.0.0.1:33939: read: connection reset by peer
	W1109 00:39:05.638430  920123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:40466->127.0.0.1:33939: read: connection reset by peer
	W1109 00:39:05.638511  920123 start.go:290] error running df -BG /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:40466->127.0.0.1:33939: read: connection reset by peer
	W1109 00:39:05.638524  920123 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:40466->127.0.0.1:33939: read: connection reset by peer
	I1109 00:39:05.638530  920123 fix.go:56] fixHost completed within 8m3.977483165s
	I1109 00:39:05.638537  920123 start.go:83] releasing machines lock for "no-preload-881977", held for 8m3.977513114s
	W1109 00:39:05.638615  920123 out.go:239] * Failed to start docker container. Running "minikube delete -p no-preload-881977" may fix it: provision: Temporary Error: NewSession: new client: new client: ssh: handshake failed: EOF
	* Failed to start docker container. Running "minikube delete -p no-preload-881977" may fix it: provision: Temporary Error: NewSession: new client: new client: ssh: handshake failed: EOF
	I1109 00:39:05.641301  920123 out.go:177] 
	W1109 00:39:05.643334  920123 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: NewSession: new client: new client: ssh: handshake failed: EOF
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: NewSession: new client: new client: ssh: handshake failed: EOF
	W1109 00:39:05.643357  920123 out.go:239] * 
	* 
	W1109 00:39:05.644318  920123 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 00:39:05.648477  920123 out.go:177] 

                                                
                                                
** /stderr **
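The stderr log above is one failure mode repeated: every SSH dial to 127.0.0.1:33939 dies during the handshake (EOF or "connection reset by peer"), minikube's retry helper (retry.go:31) sleeps a randomized interval and dials again, and each configureAuth pass gives up after roughly 2-4s before the provisioner restarts it, until the 8-minute machines lock budget is exhausted. Below is a minimal Go sketch of that retry-with-randomized-backoff shape, assuming a fixed attempt cap and 100-900ms jitter bounds; neither is taken from minikube's actual retry.go, and the code is illustrative only.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry calls fn up to maxAttempts times, sleeping a randomized interval
// between attempts, the same shape as the "will retry after Xms" lines in
// the log above. Sketch only; not minikube's actual retry.go.
func retry(maxAttempts int, fn func() error) error {
	var err error
	for i := 0; i < maxAttempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Randomized backoff between 100ms and 900ms (assumed bounds).
		wait := time.Duration(100+rand.Intn(800)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return fmt.Errorf("after %d attempts: %w", maxAttempts, err)
}

func main() {
	err := retry(4, func() error {
		return errors.New("ssh: handshake failed: EOF") // always fails, like the run above
	})
	fmt.Println(err)
}

Run against an endpoint that always fails the handshake, this produces the same cadence of "will retry after" lines seen in the log.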
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p no-preload-881977 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-881977
helpers_test.go:235: (dbg) docker inspect no-preload-881977:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3fdef2a329f94737f9419f87e0a1c0422b86da5c72d889e0ed998b72710560da",
	        "Created": "2023-11-09T00:18:52.867772019Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 913559,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-09T00:18:53.523413944Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:62753ecb37c4e3c5bf7b6c8d02fe88b543f553e92492fca245cded98b0d364dd",
	        "ResolvConfPath": "/var/lib/docker/containers/3fdef2a329f94737f9419f87e0a1c0422b86da5c72d889e0ed998b72710560da/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3fdef2a329f94737f9419f87e0a1c0422b86da5c72d889e0ed998b72710560da/hostname",
	        "HostsPath": "/var/lib/docker/containers/3fdef2a329f94737f9419f87e0a1c0422b86da5c72d889e0ed998b72710560da/hosts",
	        "LogPath": "/var/lib/docker/containers/3fdef2a329f94737f9419f87e0a1c0422b86da5c72d889e0ed998b72710560da/3fdef2a329f94737f9419f87e0a1c0422b86da5c72d889e0ed998b72710560da-json.log",
	        "Name": "/no-preload-881977",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-881977:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-881977",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4534e95c6369d1c98fbd83a4b39e36ec53256e36eff9dcd467cdf9f8d48bd7b6-init/diff:/var/lib/docker/overlay2/a37793fd41a65d2d53e46d1ba8e85f7ca52242d993ce6ed8de0d2d0e3cddac68/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4534e95c6369d1c98fbd83a4b39e36ec53256e36eff9dcd467cdf9f8d48bd7b6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4534e95c6369d1c98fbd83a4b39e36ec53256e36eff9dcd467cdf9f8d48bd7b6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4534e95c6369d1c98fbd83a4b39e36ec53256e36eff9dcd467cdf9f8d48bd7b6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-881977",
	                "Source": "/var/lib/docker/volumes/no-preload-881977/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-881977",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-881977",
	                "name.minikube.sigs.k8s.io": "no-preload-881977",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c3350f4a343f5fbbf1a16ea28cd8d9da2fc351f6b6c3d0a1efb567f98ee875ac",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33939"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33938"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33935"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33937"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33936"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c3350f4a343f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-881977": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3fdef2a329f9",
	                        "no-preload-881977"
	                    ],
	                    "NetworkID": "e7538c09064c9e298d1f44de0c17bc2360049aac006e98bc362815afc93902a4",
	                    "EndpointID": "e2d65eb4eb638daf141194c616c3f8b2dd3573204396989aab5fd29ba8bf9a80",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
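The SSH endpoint the post-mortem needs is visible in the dump above as the 127.0.0.1 binding of "22/tcp" under NetworkSettings.Ports (host port 33939 here). A minimal standalone Go sketch of recovering it from the inspect JSON, assuming the output was saved to a file first (the file name inspect.json is hypothetical):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Only the fields needed from `docker inspect` output; encoding/json ignores the rest.
type binding struct {
	HostIp   string
	HostPort string
}

type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]binding
	}
}

func main() {
	// e.g. docker inspect no-preload-881977 > inspect.json
	data, err := os.ReadFile("inspect.json")
	if err != nil {
		panic(err)
	}
	// docker inspect always emits a JSON array, even for a single container.
	var entries []inspectEntry
	if err := json.Unmarshal(data, &entries); err != nil || len(entries) == 0 {
		panic("no inspect entries")
	}
	for _, b := range entries[0].NetworkSettings.Ports["22/tcp"] {
		fmt.Printf("ssh reachable at %s:%s\n", b.HostIp, b.HostPort) // 127.0.0.1:33939 above
	}
}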
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-881977 -n no-preload-881977
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-881977 -n no-preload-881977: exit status 3 (3.453541756s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1109 00:39:09.500935  941355 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: EOF
	E1109 00:39:09.500954  941355 status.go:249] status error: NewSession: new client: new client: ssh: handshake failed: EOF

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-881977" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (974.75s)
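The status invocation above renders its result through a Go text/template supplied via --format, which is why a bare "Error" is all that reaches stdout once the SSH handshake fails. A minimal sketch of that rendering path; the status struct and its field values are illustrative, not minikube's exact types:

package main

import (
	"os"
	"text/template"
)

// Illustrative stand-in for the status object a {{.Host}} template is executed against.
type status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	// --format={{.Host}} is parsed as a text/template and executed against the status.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	// With the node unreachable over SSH, only "Error" can be reported for the host.
	if err := tmpl.Execute(os.Stdout, status{Host: "Error", Kubelet: "Nonexistent", APIServer: "Nonexistent"}); err != nil {
		panic(err)
	}
}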

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (545.53s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
	[the identical WARNING above repeated 167 times in this excerpt while the apiserver at 192.168.76.2:8443 stayed unreachable; only the two distinct errors interleaved with it are kept below]
E1109 00:39:29.191159  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/functional-471648/client.crt: no such file or directory
E1109 00:39:46.145487  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/functional-471648/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1109 00:42:08.339166  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(last message repeated 16 more times)
E1109 00:42:25.286416  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(last message repeated 35 more times)
E1109 00:43:00.824237  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(last message repeated 26 more times)
E1109 00:43:28.244208  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/old-k8s-version-134656/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[the preceding WARNING repeats 30 more times (helpers_test.go:329); the apiserver at 192.168.76.2:8443 keeps refusing connections]
E1109 00:44:46.145466  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/functional-471648/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[the same WARNING repeats another 154 times (helpers_test.go:329), still with "dial tcp 192.168.76.2:8443: connect: connection refused"]
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1109 00:48:00.824755  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[identical WARNING repeated 7 more times]
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-881977 -n no-preload-881977
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-881977 -n no-preload-881977: exit status 3 (3.487095467s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1109 00:48:12.990459  959864 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:38926->127.0.0.1:33939: read: connection reset by peer
	E1109 00:48:12.990485  959864 status.go:249] status error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:38926->127.0.0.1:33939: read: connection reset by peer

                                                
                                                
** /stderr **
start_stop_delete_test.go:274: status error: exit status 3 (may be ok)
start_stop_delete_test.go:274: "no-preload-881977" apiserver is not running, skipping kubectl commands (state="Nonexistent")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
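Note: the wall of pod-list warnings above is the helper's poll loop retrying against a dead apiserver until its 9m0s deadline expires. The query it retries can be reproduced by hand (a sketch, reusing the context, namespace, and selector from this run):

	kubectl --context no-preload-881977 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

Against this cluster it should fail the same way, since 192.168.76.2:8443 is refusing connections.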
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-881977
helpers_test.go:235: (dbg) docker inspect no-preload-881977:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3fdef2a329f94737f9419f87e0a1c0422b86da5c72d889e0ed998b72710560da",
	        "Created": "2023-11-09T00:18:52.867772019Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 913559,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-09T00:18:53.523413944Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:62753ecb37c4e3c5bf7b6c8d02fe88b543f553e92492fca245cded98b0d364dd",
	        "ResolvConfPath": "/var/lib/docker/containers/3fdef2a329f94737f9419f87e0a1c0422b86da5c72d889e0ed998b72710560da/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3fdef2a329f94737f9419f87e0a1c0422b86da5c72d889e0ed998b72710560da/hostname",
	        "HostsPath": "/var/lib/docker/containers/3fdef2a329f94737f9419f87e0a1c0422b86da5c72d889e0ed998b72710560da/hosts",
	        "LogPath": "/var/lib/docker/containers/3fdef2a329f94737f9419f87e0a1c0422b86da5c72d889e0ed998b72710560da/3fdef2a329f94737f9419f87e0a1c0422b86da5c72d889e0ed998b72710560da-json.log",
	        "Name": "/no-preload-881977",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-881977:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-881977",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4534e95c6369d1c98fbd83a4b39e36ec53256e36eff9dcd467cdf9f8d48bd7b6-init/diff:/var/lib/docker/overlay2/a37793fd41a65d2d53e46d1ba8e85f7ca52242d993ce6ed8de0d2d0e3cddac68/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4534e95c6369d1c98fbd83a4b39e36ec53256e36eff9dcd467cdf9f8d48bd7b6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4534e95c6369d1c98fbd83a4b39e36ec53256e36eff9dcd467cdf9f8d48bd7b6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4534e95c6369d1c98fbd83a4b39e36ec53256e36eff9dcd467cdf9f8d48bd7b6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-881977",
	                "Source": "/var/lib/docker/volumes/no-preload-881977/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-881977",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-881977",
	                "name.minikube.sigs.k8s.io": "no-preload-881977",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c3350f4a343f5fbbf1a16ea28cd8d9da2fc351f6b6c3d0a1efb567f98ee875ac",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33939"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33938"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33935"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33937"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33936"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c3350f4a343f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-881977": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3fdef2a329f9",
	                        "no-preload-881977"
	                    ],
	                    "NetworkID": "e7538c09064c9e298d1f44de0c17bc2360049aac006e98bc362815afc93902a4",
	                    "EndpointID": "e2d65eb4eb638daf141194c616c3f8b2dd3573204396989aab5fd29ba8bf9a80",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
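Note: the ssh handshake errors above target 127.0.0.1:33939, which per the inspect output is the host side of the container's 22/tcp mapping. The mapped port can be read back directly with a Go template (a sketch, using the container name from this run):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-881977

A connection reset on that port, while the container itself is still "running", points at the guest's sshd rather than docker's port forwarding.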
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-881977 -n no-preload-881977
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-881977 -n no-preload-881977: exit status 3 (2.00496449s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1109 00:48:15.008749  960070 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:38966->127.0.0.1:33939: read: connection reset by peer
	E1109 00:48:15.008775  960070 status.go:249] status error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:38966->127.0.0.1:33939: read: connection reset by peer

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-881977" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (545.53s)
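Note: docker inspect above reports the container State as "running" even though every ssh-based status probe fails, so the container and the guest init inside it are in different states. The container-level view alone can be checked with (a sketch):

	docker inspect -f '{{.State.Status}}' no-preload-881977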

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (41.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[identical WARNING repeated 14 more times]
E1109 00:48:28.244332  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/old-k8s-version-134656/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[identical WARNING repeated 7 more times]
E1109 00:48:36.283864  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/default-k8s-diff-port-495768/client.crt: no such file or directory
E1109 00:48:36.289213  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/default-k8s-diff-port-495768/client.crt: no such file or directory
E1109 00:48:36.299459  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/default-k8s-diff-port-495768/client.crt: no such file or directory
E1109 00:48:36.319721  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/default-k8s-diff-port-495768/client.crt: no such file or directory
E1109 00:48:36.360055  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/default-k8s-diff-port-495768/client.crt: no such file or directory
E1109 00:48:36.440940  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/default-k8s-diff-port-495768/client.crt: no such file or directory
E1109 00:48:36.601329  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/default-k8s-diff-port-495768/client.crt: no such file or directory
E1109 00:48:36.922312  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/default-k8s-diff-port-495768/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1109 00:48:37.563387  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/default-k8s-diff-port-495768/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1109 00:48:38.843632  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/default-k8s-diff-port-495768/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[identical WARNING repeated 2 more times]
E1109 00:48:41.404482  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/default-k8s-diff-port-495768/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[identical WARNING repeated 4 more times]
E1109 00:48:46.525345  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/default-k8s-diff-port-495768/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[identical WARNING repeated 3 more times]
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-881977 -n no-preload-881977
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-881977 -n no-preload-881977: exit status 3 (2.081332988s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1109 00:48:53.105092  961171 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: EOF
	E1109 00:48:53.105111  961171 status.go:249] status error: NewSession: new client: new client: ssh: handshake failed: EOF

                                                
                                                
** /stderr **
start_stop_delete_test.go:287: status error: exit status 3 (may be ok)
start_stop_delete_test.go:287: "no-preload-881977" apiserver is not running, skipping kubectl commands (state="Nonexistent")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-881977 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-881977 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.125µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-881977 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
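Note: every kubectl call in this test dies on the same refused connection, so a direct probe of the apiserver endpoint from the log (a sketch) separates a down control plane from a client-side problem:

	curl -sk --max-time 5 https://192.168.76.2:8443/healthz

A "connection refused" here matches the poll warnings above and points at the apiserver, not kubectl.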
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-881977
helpers_test.go:235: (dbg) docker inspect no-preload-881977:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3fdef2a329f94737f9419f87e0a1c0422b86da5c72d889e0ed998b72710560da",
	        "Created": "2023-11-09T00:18:52.867772019Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 913559,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-09T00:18:53.523413944Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:62753ecb37c4e3c5bf7b6c8d02fe88b543f553e92492fca245cded98b0d364dd",
	        "ResolvConfPath": "/var/lib/docker/containers/3fdef2a329f94737f9419f87e0a1c0422b86da5c72d889e0ed998b72710560da/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3fdef2a329f94737f9419f87e0a1c0422b86da5c72d889e0ed998b72710560da/hostname",
	        "HostsPath": "/var/lib/docker/containers/3fdef2a329f94737f9419f87e0a1c0422b86da5c72d889e0ed998b72710560da/hosts",
	        "LogPath": "/var/lib/docker/containers/3fdef2a329f94737f9419f87e0a1c0422b86da5c72d889e0ed998b72710560da/3fdef2a329f94737f9419f87e0a1c0422b86da5c72d889e0ed998b72710560da-json.log",
	        "Name": "/no-preload-881977",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-881977:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-881977",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4534e95c6369d1c98fbd83a4b39e36ec53256e36eff9dcd467cdf9f8d48bd7b6-init/diff:/var/lib/docker/overlay2/a37793fd41a65d2d53e46d1ba8e85f7ca52242d993ce6ed8de0d2d0e3cddac68/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4534e95c6369d1c98fbd83a4b39e36ec53256e36eff9dcd467cdf9f8d48bd7b6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4534e95c6369d1c98fbd83a4b39e36ec53256e36eff9dcd467cdf9f8d48bd7b6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4534e95c6369d1c98fbd83a4b39e36ec53256e36eff9dcd467cdf9f8d48bd7b6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-881977",
	                "Source": "/var/lib/docker/volumes/no-preload-881977/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-881977",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-881977",
	                "name.minikube.sigs.k8s.io": "no-preload-881977",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c3350f4a343f5fbbf1a16ea28cd8d9da2fc351f6b6c3d0a1efb567f98ee875ac",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33939"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33938"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33935"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33937"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33936"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c3350f4a343f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-881977": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3fdef2a329f9",
	                        "no-preload-881977"
	                    ],
	                    "NetworkID": "e7538c09064c9e298d1f44de0c17bc2360049aac006e98bc362815afc93902a4",
	                    "EndpointID": "e2d65eb4eb638daf141194c616c3f8b2dd3573204396989aab5fd29ba8bf9a80",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-881977 -n no-preload-881977
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-881977 -n no-preload-881977: exit status 3 (2.99140453s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1109 00:48:56.115156  961197 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: EOF
	E1109 00:48:56.115172  961197 status.go:249] status error: NewSession: new client: new client: ssh: handshake failed: EOF

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-881977" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (41.09s)
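Note: the "ssh: handshake failed: EOF" above surfaced right after the stop/start cycle, when sshd inside the node was not yet (or no longer) serving. Below is a minimal, hypothetical Go sketch (not part of the test suite) for separating "port closed, node stopped" from an SSH-layer failure, assuming the 22/tcp host mapping (127.0.0.1:33939) reported by `docker inspect` above:

```go
// probe_ssh.go - diagnostic sketch only; file name, port, and retry policy
// are assumptions taken from the docker inspect output in this report.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "127.0.0.1:33939" // 22/tcp host mapping from the inspect output above
	for attempt := 1; attempt <= 3; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			// Refused or timed out: the node is most likely not running.
			fmt.Printf("attempt %d: %v\n", attempt, err)
			time.Sleep(time.Second)
			continue
		}
		conn.Close()
		// TCP connects but SSH still fails: the problem is at the SSH layer,
		// matching the handshake EOF in the log.
		fmt.Printf("attempt %d: TCP connect OK; failure was at the SSH layer\n", attempt)
		return
	}
}
```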
E1109 00:54:03.967588  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/default-k8s-diff-port-495768/client.crt: no such file or directory
E1109 00:54:22.831140  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/kindnet-901856/client.crt: no such file or directory
E1109 00:54:22.836808  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/kindnet-901856/client.crt: no such file or directory
E1109 00:54:22.847152  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/kindnet-901856/client.crt: no such file or directory
E1109 00:54:22.867531  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/kindnet-901856/client.crt: no such file or directory
E1109 00:54:22.907929  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/kindnet-901856/client.crt: no such file or directory
E1109 00:54:22.988233  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/kindnet-901856/client.crt: no such file or directory
E1109 00:54:23.148806  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/kindnet-901856/client.crt: no such file or directory
E1109 00:54:23.469497  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/kindnet-901856/client.crt: no such file or directory
E1109 00:54:24.110459  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/kindnet-901856/client.crt: no such file or directory
E1109 00:54:25.391603  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/kindnet-901856/client.crt: no such file or directory
E1109 00:54:27.952553  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/kindnet-901856/client.crt: no such file or directory
E1109 00:54:33.072855  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/kindnet-901856/client.crt: no such file or directory
E1109 00:54:43.313486  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/kindnet-901856/client.crt: no such file or directory
E1109 00:54:46.145474  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/functional-471648/client.crt: no such file or directory
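Note: the cert_rotation errors above come from a client still watching profile certificates (e.g. kindnet-901856/client.crt) whose profile directories parallel tests have already deleted. A rough sketch, assuming a hypothetical reload helper and explicitly not client-go's actual rotation code, of the existence guard such a reload loop implies:

```go
// cert_guard.go - illustrative only; reloadIfPresent is a made-up helper.
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func reloadIfPresent(certPath string) error {
	if _, err := os.Stat(certPath); errors.Is(err, fs.ErrNotExist) {
		// The profile was torn down; skip this rotation cycle instead of
		// logging "no such file or directory" on every tick.
		fmt.Printf("skipping rotation, %s is gone\n", certPath)
		return nil
	}
	// ... load the key pair and swap it into the transport here ...
	return nil
}

func main() {
	_ = reloadIfPresent("/home/jenkins/minikube-integration/17586-749551/.minikube/profiles/kindnet-901856/client.crt")
}
```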

Test pass (266/306)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 14.76
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.1
10 TestDownloadOnly/v1.28.3/json-events 10.76
11 TestDownloadOnly/v1.28.3/preload-exists 0
15 TestDownloadOnly/v1.28.3/LogsDuration 0.09
16 TestDownloadOnly/DeleteAll 0.26
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.17
19 TestBinaryMirror 0.64
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.1
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.11
25 TestAddons/Setup 142.19
27 TestAddons/parallel/Registry 15.83
29 TestAddons/parallel/InspektorGadget 10.88
30 TestAddons/parallel/MetricsServer 5.9
33 TestAddons/parallel/CSI 44.08
34 TestAddons/parallel/Headlamp 11.62
35 TestAddons/parallel/CloudSpanner 5.64
36 TestAddons/parallel/LocalPath 53.42
37 TestAddons/parallel/NvidiaDevicePlugin 5.61
40 TestAddons/serial/GCPAuth/Namespaces 0.19
41 TestAddons/StoppedEnableDisable 12.48
42 TestCertOptions 36.8
43 TestCertExpiration 227.29
45 TestForceSystemdFlag 41.53
46 TestForceSystemdEnv 55.98
47 TestDockerEnvContainerd 48.1
52 TestErrorSpam/setup 33.06
53 TestErrorSpam/start 0.87
54 TestErrorSpam/status 1.12
55 TestErrorSpam/pause 1.85
56 TestErrorSpam/unpause 2.09
57 TestErrorSpam/stop 1.47
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 89.07
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 6.24
64 TestFunctional/serial/KubeContext 0.06
65 TestFunctional/serial/KubectlGetPods 0.09
68 TestFunctional/serial/CacheCmd/cache/add_remote 5.01
69 TestFunctional/serial/CacheCmd/cache/add_local 2.13
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
71 TestFunctional/serial/CacheCmd/cache/list 0.08
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.38
73 TestFunctional/serial/CacheCmd/cache/cache_reload 2.59
74 TestFunctional/serial/CacheCmd/cache/delete 0.17
75 TestFunctional/serial/MinikubeKubectlCmd 0.17
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.18
77 TestFunctional/serial/ExtraConfig 43.17
78 TestFunctional/serial/ComponentHealth 0.11
79 TestFunctional/serial/LogsCmd 1.85
80 TestFunctional/serial/LogsFileCmd 1.9
81 TestFunctional/serial/InvalidService 4.56
83 TestFunctional/parallel/ConfigCmd 0.67
84 TestFunctional/parallel/DashboardCmd 12.85
85 TestFunctional/parallel/DryRun 0.6
86 TestFunctional/parallel/InternationalLanguage 0.36
87 TestFunctional/parallel/StatusCmd 1.36
91 TestFunctional/parallel/ServiceCmdConnect 8.85
92 TestFunctional/parallel/AddonsCmd 0.27
93 TestFunctional/parallel/PersistentVolumeClaim 25.95
95 TestFunctional/parallel/SSHCmd 0.81
96 TestFunctional/parallel/CpCmd 1.76
98 TestFunctional/parallel/FileSync 0.45
99 TestFunctional/parallel/CertSync 2.28
103 TestFunctional/parallel/NodeLabels 0.09
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.88
107 TestFunctional/parallel/License 0.43
108 TestFunctional/parallel/Version/short 0.08
109 TestFunctional/parallel/Version/components 1.26
110 TestFunctional/parallel/ImageCommands/ImageListShort 0.32
111 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
112 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
113 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
114 TestFunctional/parallel/ImageCommands/ImageBuild 3.5
115 TestFunctional/parallel/ImageCommands/Setup 2.5
116 TestFunctional/parallel/UpdateContextCmd/no_changes 0.26
117 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.28
118 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.26
120 TestFunctional/parallel/ServiceCmd/DeployApp 9.31
123 TestFunctional/parallel/ServiceCmd/List 0.49
124 TestFunctional/parallel/ServiceCmd/JSONOutput 0.44
125 TestFunctional/parallel/ServiceCmd/HTTPS 0.51
126 TestFunctional/parallel/ServiceCmd/Format 0.55
127 TestFunctional/parallel/ServiceCmd/URL 0.54
129 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.79
131 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
133 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.56
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.92
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.67
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
138 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
143 TestFunctional/parallel/ProfileCmd/profile_not_create 0.51
144 TestFunctional/parallel/ProfileCmd/profile_list 0.49
145 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
146 TestFunctional/parallel/MountCmd/any-port 7.62
147 TestFunctional/parallel/MountCmd/specific-port 2.45
148 TestFunctional/parallel/MountCmd/VerifyCleanup 1.87
149 TestFunctional/delete_addon-resizer_images 0.09
150 TestFunctional/delete_my-image_image 0.02
151 TestFunctional/delete_minikube_cached_images 0.02
155 TestIngressAddonLegacy/StartLegacyK8sCluster 90.8
157 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 9.19
158 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.64
162 TestJSONOutput/start/Command 77.24
163 TestJSONOutput/start/Audit 0
165 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
166 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
168 TestJSONOutput/pause/Command 0.79
169 TestJSONOutput/pause/Audit 0
171 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/unpause/Command 0.73
175 TestJSONOutput/unpause/Audit 0
177 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/stop/Command 5.89
181 TestJSONOutput/stop/Audit 0
183 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
185 TestErrorJSONOutput 0.27
187 TestKicCustomNetwork/create_custom_network 46.95
188 TestKicCustomNetwork/use_default_bridge_network 34.18
189 TestKicExistingNetwork 33.98
190 TestKicCustomSubnet 36.72
191 TestKicStaticIP 35.38
192 TestMainNoArgs 0.08
193 TestMinikubeProfile 71.13
196 TestMountStart/serial/StartWithMountFirst 9.32
197 TestMountStart/serial/VerifyMountFirst 0.3
198 TestMountStart/serial/StartWithMountSecond 9.93
199 TestMountStart/serial/VerifyMountSecond 0.29
200 TestMountStart/serial/DeleteFirst 1.7
201 TestMountStart/serial/VerifyMountPostDelete 0.29
202 TestMountStart/serial/Stop 1.25
203 TestMountStart/serial/RestartStopped 7.88
204 TestMountStart/serial/VerifyMountPostStop 0.3
207 TestMultiNode/serial/FreshStart2Nodes 81.09
208 TestMultiNode/serial/DeployApp2Nodes 6.78
209 TestMultiNode/serial/PingHostFrom2Pods 1.32
210 TestMultiNode/serial/AddNode 16.7
211 TestMultiNode/serial/ProfileList 0.39
212 TestMultiNode/serial/CopyFile 11.49
213 TestMultiNode/serial/StopNode 2.42
214 TestMultiNode/serial/StartAfterStop 11.96
215 TestMultiNode/serial/RestartKeepsNodes 120
216 TestMultiNode/serial/DeleteNode 5.19
217 TestMultiNode/serial/StopMultiNode 24.3
218 TestMultiNode/serial/RestartMultiNode 80.53
219 TestMultiNode/serial/ValidateNameConflict 41.11
224 TestPreload 169.75
226 TestScheduledStopUnix 107.24
229 TestInsufficientStorage 11.37
230 TestRunningBinaryUpgrade 85.99
232 TestKubernetesUpgrade 385.87
233 TestMissingContainerUpgrade 192.24
235 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
236 TestNoKubernetes/serial/StartWithK8s 42.49
237 TestNoKubernetes/serial/StartWithStopK8s 17.23
238 TestNoKubernetes/serial/Start 5.75
239 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
240 TestNoKubernetes/serial/ProfileList 0.92
241 TestNoKubernetes/serial/Stop 1.33
242 TestNoKubernetes/serial/StartNoArgs 7.3
243 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.4
244 TestStoppedBinaryUpgrade/Setup 1.77
245 TestStoppedBinaryUpgrade/Upgrade 111.25
246 TestStoppedBinaryUpgrade/MinikubeLogs 1.2
255 TestPause/serial/Start 92.55
256 TestPause/serial/SecondStartNoReconfiguration 7.35
257 TestPause/serial/Pause 0.9
258 TestPause/serial/VerifyStatus 0.43
259 TestPause/serial/Unpause 1.16
260 TestPause/serial/PauseAgain 1.46
261 TestPause/serial/DeletePaused 3.59
262 TestPause/serial/VerifyDeletedResources 0.83
270 TestNetworkPlugins/group/false 6.49
275 TestStartStop/group/old-k8s-version/serial/FirstStart 127.92
276 TestStartStop/group/old-k8s-version/serial/DeployApp 9.66
277 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.31
278 TestStartStop/group/old-k8s-version/serial/Stop 12.32
280 TestStartStop/group/no-preload/serial/FirstStart 86.69
281 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.4
282 TestStartStop/group/old-k8s-version/serial/SecondStart 668.83
283 TestStartStop/group/no-preload/serial/DeployApp 8.54
284 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.32
288 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.06
289 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.14
290 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.35
291 TestStartStop/group/old-k8s-version/serial/Pause 3.41
293 TestStartStop/group/embed-certs/serial/FirstStart 55.58
294 TestStartStop/group/embed-certs/serial/DeployApp 9.52
295 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.19
296 TestStartStop/group/embed-certs/serial/Stop 12.09
297 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
298 TestStartStop/group/embed-certs/serial/SecondStart 334.07
299 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 12.03
300 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
301 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.37
302 TestStartStop/group/embed-certs/serial/Pause 3.45
304 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 62.14
305 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.49
306 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.15
307 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.15
308 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
309 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 338.4
311 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 14.03
312 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
313 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.35
314 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.76
316 TestStartStop/group/newest-cni/serial/FirstStart 43.32
317 TestStartStop/group/newest-cni/serial/DeployApp 0
318 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.27
319 TestStartStop/group/newest-cni/serial/Stop 1.28
320 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
321 TestStartStop/group/newest-cni/serial/SecondStart 29.94
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.39
325 TestStartStop/group/newest-cni/serial/Pause 3.22
326 TestNetworkPlugins/group/auto/Start 59.14
327 TestNetworkPlugins/group/auto/KubeletFlags 0.34
328 TestNetworkPlugins/group/auto/NetCatPod 9.36
329 TestNetworkPlugins/group/auto/DNS 0.21
330 TestNetworkPlugins/group/auto/Localhost 0.19
331 TestNetworkPlugins/group/auto/HairPin 0.19
332 TestNetworkPlugins/group/kindnet/Start 85.67
334 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
335 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
336 TestNetworkPlugins/group/kindnet/NetCatPod 9.39
337 TestNetworkPlugins/group/kindnet/DNS 0.23
338 TestNetworkPlugins/group/kindnet/Localhost 0.18
339 TestNetworkPlugins/group/kindnet/HairPin 0.2
340 TestNetworkPlugins/group/calico/Start 82.63
341 TestNetworkPlugins/group/custom-flannel/Start 67.49
342 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.41
343 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.55
344 TestNetworkPlugins/group/calico/ControllerPod 5.04
345 TestNetworkPlugins/group/custom-flannel/DNS 0.22
346 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
347 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
348 TestNetworkPlugins/group/calico/KubeletFlags 0.35
349 TestNetworkPlugins/group/calico/NetCatPod 9.49
350 TestNetworkPlugins/group/calico/DNS 0.34
351 TestNetworkPlugins/group/calico/Localhost 0.3
352 TestNetworkPlugins/group/calico/HairPin 0.31
353 TestNetworkPlugins/group/enable-default-cni/Start 52.32
354 TestNetworkPlugins/group/flannel/Start 62.49
355 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.47
356 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.51
357 TestNetworkPlugins/group/enable-default-cni/DNS 16.53
358 TestNetworkPlugins/group/flannel/ControllerPod 5.03
359 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
360 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
361 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
362 TestNetworkPlugins/group/flannel/NetCatPod 11.4
363 TestNetworkPlugins/group/flannel/DNS 0.26
364 TestNetworkPlugins/group/flannel/Localhost 0.3
365 TestNetworkPlugins/group/flannel/HairPin 0.27
366 TestNetworkPlugins/group/bridge/Start 80.42
367 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
368 TestNetworkPlugins/group/bridge/NetCatPod 9.34
369 TestNetworkPlugins/group/bridge/DNS 0.19
370 TestNetworkPlugins/group/bridge/Localhost 0.17
371 TestNetworkPlugins/group/bridge/HairPin 0.17
TestDownloadOnly/v1.16.0/json-events (14.76s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-282555 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-282555 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (14.761786191s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (14.76s)
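Note: each "(dbg) Run:" / "(dbg) Done:" pair in this report wraps one invocation of the minikube binary and times it. A simplified sketch of that pattern, with args abridged from the log line above (the real helper in minikube's test/integration package also handles contexts, retries, and artifact capture):

```go
// run_dbg.go - a rough sketch, not the actual test harness helper.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("out/minikube-linux-arm64",
		"start", "-o=json", "--download-only", "-p", "download-only-282555",
		"--force", "--alsologtostderr",
		"--kubernetes-version=v1.16.0",
		"--container-runtime=containerd", "--driver=docker")
	out, err := cmd.CombinedOutput()
	// Corresponds to the "(dbg) Done: ... (14.761786191s)" line in the report.
	fmt.Printf("(dbg) Done in %s, err=%v, %d bytes of output\n",
		time.Since(start), err, len(out))
}
```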

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-282555
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-282555: exit status 85 (97.698096ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-282555 | jenkins | v1.32.0 | 08 Nov 23 23:35 UTC |          |
	|         | -p download-only-282555        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/08 23:35:10
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 23:35:10.967903  754907 out.go:296] Setting OutFile to fd 1 ...
	I1108 23:35:10.968038  754907 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:35:10.968048  754907 out.go:309] Setting ErrFile to fd 2...
	I1108 23:35:10.968054  754907 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:35:10.968304  754907 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17586-749551/.minikube/bin
	W1108 23:35:10.968439  754907 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17586-749551/.minikube/config/config.json: open /home/jenkins/minikube-integration/17586-749551/.minikube/config/config.json: no such file or directory
	I1108 23:35:10.968825  754907 out.go:303] Setting JSON to true
	I1108 23:35:10.969681  754907 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22660,"bootTime":1699463851,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1108 23:35:10.969754  754907 start.go:138] virtualization:  
	I1108 23:35:10.972408  754907 out.go:97] [download-only-282555] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1108 23:35:10.974192  754907 out.go:169] MINIKUBE_LOCATION=17586
	W1108 23:35:10.972677  754907 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17586-749551/.minikube/cache/preloaded-tarball: no such file or directory
	I1108 23:35:10.972737  754907 notify.go:220] Checking for updates...
	I1108 23:35:10.975967  754907 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 23:35:10.977600  754907 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17586-749551/kubeconfig
	I1108 23:35:10.979388  754907 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-749551/.minikube
	I1108 23:35:10.980860  754907 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1108 23:35:10.984224  754907 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1108 23:35:10.984489  754907 driver.go:378] Setting default libvirt URI to qemu:///system
	I1108 23:35:11.013361  754907 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1108 23:35:11.013522  754907 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 23:35:11.097292  754907 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-11-08 23:35:11.087315518 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1108 23:35:11.097405  754907 docker.go:295] overlay module found
	I1108 23:35:11.099734  754907 out.go:97] Using the docker driver based on user configuration
	I1108 23:35:11.099759  754907 start.go:298] selected driver: docker
	I1108 23:35:11.099766  754907 start.go:902] validating driver "docker" against <nil>
	I1108 23:35:11.099911  754907 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 23:35:11.173980  754907 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-11-08 23:35:11.164306121 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1108 23:35:11.174143  754907 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1108 23:35:11.174444  754907 start_flags.go:394] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1108 23:35:11.174603  754907 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1108 23:35:11.176679  754907 out.go:169] Using Docker driver with root privileges
	I1108 23:35:11.178382  754907 cni.go:84] Creating CNI manager for ""
	I1108 23:35:11.178403  754907 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1108 23:35:11.178415  754907 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 23:35:11.178431  754907 start_flags.go:323] config:
	{Name:download-only-282555 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-282555 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 23:35:11.180260  754907 out.go:97] Starting control plane node download-only-282555 in cluster download-only-282555
	I1108 23:35:11.180286  754907 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1108 23:35:11.181866  754907 out.go:97] Pulling base image ...
	I1108 23:35:11.181902  754907 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1108 23:35:11.182059  754907 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1108 23:35:11.199271  754907 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1108 23:35:11.199873  754907 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory
	I1108 23:35:11.199993  754907 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1108 23:35:11.259979  754907 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4
	I1108 23:35:11.260012  754907 cache.go:56] Caching tarball of preloaded images
	I1108 23:35:11.260535  754907 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1108 23:35:11.262540  754907 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1108 23:35:11.262564  754907 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	I1108 23:35:11.411855  754907 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:1f1e2324dbd6e4f3d8734226d9194e9f -> /home/jenkins/minikube-integration/17586-749551/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4
	I1108 23:35:18.054339  754907 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-282555"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.10s)
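Note: the preload download above is fetched with a `?checksum=md5:...` query parameter, so the cached tarball can be re-verified offline. A minimal sketch of that verification, using the path and digest copied from this log (this is not minikube's actual download code):

```go
// verify_preload.go - illustrative sketch; path and expected digest are
// taken from the download line in the log above.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func main() {
	const expected = "1f1e2324dbd6e4f3d8734226d9194e9f" // from ?checksum=md5:...
	f, err := os.Open("/home/jenkins/minikube-integration/17586-749551/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil { // stream the tarball through the hash
		panic(err)
	}
	got := hex.EncodeToString(h.Sum(nil))
	fmt.Println("checksum ok:", got == expected)
}
```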

TestDownloadOnly/v1.28.3/json-events (10.76s)

=== RUN   TestDownloadOnly/v1.28.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-282555 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-282555 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (10.763573795s)
--- PASS: TestDownloadOnly/v1.28.3/json-events (10.76s)

TestDownloadOnly/v1.28.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.3/preload-exists
--- PASS: TestDownloadOnly/v1.28.3/preload-exists (0.00s)

TestDownloadOnly/v1.28.3/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.3/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-282555
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-282555: exit status 85 (89.230731ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-282555 | jenkins | v1.32.0 | 08 Nov 23 23:35 UTC |          |
	|         | -p download-only-282555        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-282555 | jenkins | v1.32.0 | 08 Nov 23 23:35 UTC |          |
	|         | -p download-only-282555        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.3   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/08 23:35:25
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 23:35:25.836064  754980 out.go:296] Setting OutFile to fd 1 ...
	I1108 23:35:25.836241  754980 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:35:25.836252  754980 out.go:309] Setting ErrFile to fd 2...
	I1108 23:35:25.836258  754980 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:35:25.836528  754980 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17586-749551/.minikube/bin
	W1108 23:35:25.836681  754980 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17586-749551/.minikube/config/config.json: open /home/jenkins/minikube-integration/17586-749551/.minikube/config/config.json: no such file or directory
	I1108 23:35:25.836913  754980 out.go:303] Setting JSON to true
	I1108 23:35:25.837781  754980 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22675,"bootTime":1699463851,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1108 23:35:25.837859  754980 start.go:138] virtualization:  
	I1108 23:35:25.840360  754980 out.go:97] [download-only-282555] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1108 23:35:25.843068  754980 out.go:169] MINIKUBE_LOCATION=17586
	I1108 23:35:25.840691  754980 notify.go:220] Checking for updates...
	I1108 23:35:25.847029  754980 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 23:35:25.849222  754980 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17586-749551/kubeconfig
	I1108 23:35:25.851294  754980 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-749551/.minikube
	I1108 23:35:25.853445  754980 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1108 23:35:25.857321  754980 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1108 23:35:25.857876  754980 config.go:182] Loaded profile config "download-only-282555": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W1108 23:35:25.857948  754980 start.go:810] api.Load failed for download-only-282555: filestore "download-only-282555": Docker machine "download-only-282555" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1108 23:35:25.858069  754980 driver.go:378] Setting default libvirt URI to qemu:///system
	W1108 23:35:25.858097  754980 start.go:810] api.Load failed for download-only-282555: filestore "download-only-282555": Docker machine "download-only-282555" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1108 23:35:25.882004  754980 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1108 23:35:25.882089  754980 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 23:35:25.962802  754980 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-11-08 23:35:25.951796078 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1108 23:35:25.962921  754980 docker.go:295] overlay module found
	I1108 23:35:25.965016  754980 out.go:97] Using the docker driver based on existing profile
	I1108 23:35:25.965052  754980 start.go:298] selected driver: docker
	I1108 23:35:25.965063  754980 start.go:902] validating driver "docker" against &{Name:download-only-282555 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-282555 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 23:35:25.965266  754980 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 23:35:26.035055  754980 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-11-08 23:35:26.024522506 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1108 23:35:26.035502  754980 cni.go:84] Creating CNI manager for ""
	I1108 23:35:26.035524  754980 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1108 23:35:26.035538  754980 start_flags.go:323] config:
	{Name:download-only-282555 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:download-only-282555 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 23:35:26.037656  754980 out.go:97] Starting control plane node download-only-282555 in cluster download-only-282555
	I1108 23:35:26.037706  754980 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1108 23:35:26.039475  754980 out.go:97] Pulling base image ...
	I1108 23:35:26.039502  754980 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1108 23:35:26.039701  754980 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1108 23:35:26.059239  754980 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1108 23:35:26.059390  754980 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory
	I1108 23:35:26.059415  754980 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory, skipping pull
	I1108 23:35:26.059423  754980 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in cache, skipping pull
	I1108 23:35:26.059433  754980 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 as a tarball
	I1108 23:35:26.126244  754980 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4
	I1108 23:35:26.126282  754980 cache.go:56] Caching tarball of preloaded images
	I1108 23:35:26.126509  754980 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1108 23:35:26.128650  754980 out.go:97] Downloading Kubernetes v1.28.3 preload ...
	I1108 23:35:26.128683  754980 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4 ...
	I1108 23:35:26.279649  754980 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4?checksum=md5:bef3312f8cc1e9e2e6a78bd8b3d269c4 -> /home/jenkins/minikube-integration/17586-749551/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4
	I1108 23:35:34.221593  754980 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4 ...
	I1108 23:35:34.221706  754980 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17586-749551/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4 ...
	I1108 23:35:35.129983  754980 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on containerd
	I1108 23:35:35.130121  754980 profile.go:148] Saving config to /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/download-only-282555/config.json ...
	I1108 23:35:35.130352  754980 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1108 23:35:35.130563  754980 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/17586-749551/.minikube/cache/linux/arm64/v1.28.3/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-282555"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.3/LogsDuration (0.09s)
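[Editor's note] The preload fetch above appends a ?checksum=md5:... query parameter and the log then shows "saving checksum" / "verifying checksum" steps. A minimal sketch of that verify-while-downloading idea, using only the Go standard library; the destination path is an assumption, and minikube's actual download.go delegates this to a download library rather than doing it by hand:

// Hypothetical sketch: download a preload tarball and verify its MD5
// against the value carried in the ?checksum= parameter seen above.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	// Hash the stream while writing it to disk, then compare digests.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	err := downloadWithMD5(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4",
		"/tmp/preload.tar.lz4", // hypothetical destination
		"bef3312f8cc1e9e2e6a78bd8b3d269c4", // from the ?checksum=md5: parameter above
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}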

                                                
                                    
TestDownloadOnly/DeleteAll (0.26s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.26s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.17s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-282555
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.17s)

TestBinaryMirror (0.64s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-304793 --alsologtostderr --binary-mirror http://127.0.0.1:39905 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-304793" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-304793
--- PASS: TestBinaryMirror (0.64s)
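[Editor's note] TestBinaryMirror points minikube at a local HTTP server (127.0.0.1:39905 in this run) instead of dl.k8s.io. A minimal sketch of the kind of mirror such a flag could target, assuming a local directory laid out like the upstream release tree (the directory name is hypothetical; this is not the test's own server setup):

// Serve ./mirror-root so that --binary-mirror http://127.0.0.1:39905
// resolves kubectl/kubeadm/kubelet from local files, e.g.
// ./mirror-root/release/v1.28.3/bin/linux/arm64/kubectl.
package main

import (
	"log"
	"net/http"
)

func main() {
	http.Handle("/", http.FileServer(http.Dir("./mirror-root")))
	log.Fatal(http.ListenAndServe("127.0.0.1:39905", nil))
}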

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.1s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-118967
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-118967: exit status 85 (98.46292ms)

-- stdout --
	* Profile "addons-118967" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-118967"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.10s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.11s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-118967
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-118967: exit status 85 (114.09566ms)

-- stdout --
	* Profile "addons-118967" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-118967"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.11s)

TestAddons/Setup (142.19s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-118967 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-118967 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (2m22.192001702s)
--- PASS: TestAddons/Setup (142.19s)

TestAddons/parallel/Registry (15.83s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 42.453282ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-fnqg8" [b764ff1f-e51d-4072-9a1e-bf302ffc887e] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.027404348s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-znflj" [7a21a579-a336-4282-b18c-285134164e96] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.014848745s
addons_test.go:339: (dbg) Run:  kubectl --context addons-118967 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-118967 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-118967 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.221572642s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p addons-118967 ip
2023/11/08 23:38:15 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-arm64 -p addons-118967 addons disable registry --alsologtostderr -v=1
addons_test.go:387: (dbg) Done: out/minikube-linux-arm64 -p addons-118967 addons disable registry --alsologtostderr -v=1: (1.113877397s)
--- PASS: TestAddons/parallel/Registry (15.83s)
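[Editor's note] The "waiting ... for pods matching" lines above come from a poll loop in the test helpers. A rough equivalent that shells out to kubectl rather than using the client-go API the real helpers_test.go uses; the function names and the 2-second interval are assumptions:

// Poll a label selector until every matching pod reports phase Running,
// or give up at the deadline.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForRunning(kubecontext, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubecontext,
			"get", "pods", "-n", ns, "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		phases := strings.Fields(string(out))
		if err == nil && len(phases) > 0 && allRunning(phases) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods %q in %q not Running within %v", selector, ns, timeout)
}

func allRunning(phases []string) bool {
	for _, p := range phases {
		if p != "Running" {
			return false
		}
	}
	return true
}

func main() {
	// Values taken from the Registry test above.
	if err := waitForRunning("addons-118967", "kube-system", "actual-registry=true", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}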

                                                
                                    
TestAddons/parallel/InspektorGadget (10.88s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-n6t2r" [2f8fb919-9bf0-4b6f-82fa-904289136350] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.012852001s
addons_test.go:840: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-118967
addons_test.go:840: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-118967: (5.86997068s)
--- PASS: TestAddons/parallel/InspektorGadget (10.88s)

TestAddons/parallel/MetricsServer (5.9s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 3.944878ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-c9kmx" [b5d47038-e0ba-4b1c-bdff-2a99a0a81148] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.013377073s
addons_test.go:414: (dbg) Run:  kubectl --context addons-118967 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-arm64 -p addons-118967 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.90s)

TestAddons/parallel/CSI (44.08s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 46.449293ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-118967 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118967 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118967 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118967 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-118967 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [3c178d54-b230-4c3a-8f5b-fc03e9993b21] Pending
helpers_test.go:344: "task-pv-pod" [3c178d54-b230-4c3a-8f5b-fc03e9993b21] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [3c178d54-b230-4c3a-8f5b-fc03e9993b21] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.018891613s
addons_test.go:583: (dbg) Run:  kubectl --context addons-118967 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-118967 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-118967 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-118967 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-118967 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-118967 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118967 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118967 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118967 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118967 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118967 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118967 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118967 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118967 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118967 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118967 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118967 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118967 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118967 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-118967 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d1aa2527-d5a8-425a-a4c9-1220d1af9420] Pending
helpers_test.go:344: "task-pv-pod-restore" [d1aa2527-d5a8-425a-a4c9-1220d1af9420] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d1aa2527-d5a8-425a-a4c9-1220d1af9420] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.024719994s
addons_test.go:625: (dbg) Run:  kubectl --context addons-118967 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-118967 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-118967 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-arm64 -p addons-118967 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-arm64 -p addons-118967 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.896982982s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-arm64 -p addons-118967 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (44.08s)
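[Editor's note] The long run of identical "get pvc hpvc-restore ... jsonpath={.status.phase}" invocations above is the same wait pattern applied to a PersistentVolumeClaim: poll .status.phase until it reads Bound. A compact sketch under the same assumptions as the pod-wait sketch earlier (helper name and interval are mine):

// Poll a PVC until its phase leaves Pending and reads Bound.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForPVCBound(kubecontext, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "--context", kubecontext,
			"get", "pvc", name, "-n", "default",
			"-o", "jsonpath={.status.phase}").Output()
		if strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %q not Bound within %v", name, timeout)
}

func main() {
	fmt.Println(waitForPVCBound("addons-118967", "hpvc-restore", 6*time.Minute))
}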

                                                
                                    
TestAddons/parallel/Headlamp (11.62s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-118967 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-118967 --alsologtostderr -v=1: (1.589012123s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-vnjrn" [ff065270-c323-4685-bc55-aa7d31f38c29] Pending
helpers_test.go:344: "headlamp-777fd4b855-vnjrn" [ff065270-c323-4685-bc55-aa7d31f38c29] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-vnjrn" [ff065270-c323-4685-bc55-aa7d31f38c29] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.031638719s
--- PASS: TestAddons/parallel/Headlamp (11.62s)

TestAddons/parallel/CloudSpanner (5.64s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-56665cdfc-tpzgz" [3b84ffad-ed8e-436a-a7bf-cb98572a5b28] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.014407092s
addons_test.go:859: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-118967
--- PASS: TestAddons/parallel/CloudSpanner (5.64s)

TestAddons/parallel/LocalPath (53.42s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-118967 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-118967 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118967 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118967 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118967 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118967 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118967 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118967 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [16781e0d-f629-476e-b06e-1a0172caf9a5] Pending
helpers_test.go:344: "test-local-path" [16781e0d-f629-476e-b06e-1a0172caf9a5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [16781e0d-f629-476e-b06e-1a0172caf9a5] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [16781e0d-f629-476e-b06e-1a0172caf9a5] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.010122844s
addons_test.go:890: (dbg) Run:  kubectl --context addons-118967 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-arm64 -p addons-118967 ssh "cat /opt/local-path-provisioner/pvc-cbcd01e2-78c3-4735-bcd6-14fcf27e46a7_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-118967 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-118967 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-arm64 -p addons-118967 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-arm64 -p addons-118967 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.667936555s)
--- PASS: TestAddons/parallel/LocalPath (53.42s)

TestAddons/parallel/NvidiaDevicePlugin (5.61s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-46zpk" [87c2969a-8026-42fe-86c6-e669b75ebd9f] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0170235s
addons_test.go:954: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-118967
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.61s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-118967 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-118967 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/StoppedEnableDisable (12.48s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-118967
addons_test.go:171: (dbg) Done: out/minikube-linux-arm64 stop -p addons-118967: (12.124800526s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-118967
addons_test.go:179: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-118967
addons_test.go:184: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-118967
--- PASS: TestAddons/StoppedEnableDisable (12.48s)

TestCertOptions (36.8s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-566596 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
E1109 00:16:03.874776  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-566596 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (33.937738501s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-566596 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-566596 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-566596 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-566596" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-566596
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-566596: (2.104150945s)
--- PASS: TestCertOptions (36.80s)

TestCertExpiration (227.29s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-132057 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-132057 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (38.083537833s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-132057 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-132057 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.721893681s)
helpers_test.go:175: Cleaning up "cert-expiration-132057" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-132057
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-132057: (2.48687978s)
--- PASS: TestCertExpiration (227.29s)

TestForceSystemdFlag (41.53s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-026969 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-026969 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (38.498370199s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-026969 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-026969" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-026969
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-026969: (2.505819509s)
--- PASS: TestForceSystemdFlag (41.53s)

TestForceSystemdEnv (55.98s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-100179 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-100179 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (52.995608052s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-100179 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-100179" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-100179
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-100179: (2.364364315s)
--- PASS: TestForceSystemdEnv (55.98s)

TestDockerEnvContainerd (48.1s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-254666 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-254666 --driver=docker  --container-runtime=containerd: (31.759280511s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-254666"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-254666": (1.356252859s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-0Xh12S3jMmuz/agent.771867" SSH_AGENT_PID="771868" DOCKER_HOST=ssh://docker@127.0.0.1:33707 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-0Xh12S3jMmuz/agent.771867" SSH_AGENT_PID="771868" DOCKER_HOST=ssh://docker@127.0.0.1:33707 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-0Xh12S3jMmuz/agent.771867" SSH_AGENT_PID="771868" DOCKER_HOST=ssh://docker@127.0.0.1:33707 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.779234135s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-0Xh12S3jMmuz/agent.771867" SSH_AGENT_PID="771868" DOCKER_HOST=ssh://docker@127.0.0.1:33707 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-254666" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-254666
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-254666: (2.038703235s)
--- PASS: TestDockerEnvContainerd (48.10s)
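[Editor's note] The docker-env exercise above boils down to pointing a stock docker CLI at the minikube node over SSH by exporting DOCKER_HOST and an agent socket. A sketch that reproduces the environment this run used; the socket path, agent PID, and port are the values from this particular run and will differ elsewhere:

// Run "docker version" against the minikube node over SSH, as the
// docker-env --ssh-host --ssh-add flow sets up.
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("docker", "version")
	cmd.Env = append(os.Environ(),
		"SSH_AUTH_SOCK=/tmp/ssh-0Xh12S3jMmuz/agent.771867",
		"SSH_AGENT_PID=771868",
		"DOCKER_HOST=ssh://docker@127.0.0.1:33707",
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	_ = cmd.Run()
}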

                                                
                                    
TestErrorSpam/setup (33.06s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-927030 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-927030 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-927030 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-927030 --driver=docker  --container-runtime=containerd: (33.064528004s)
--- PASS: TestErrorSpam/setup (33.06s)

TestErrorSpam/start (0.87s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-927030 --log_dir /tmp/nospam-927030 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-927030 --log_dir /tmp/nospam-927030 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-927030 --log_dir /tmp/nospam-927030 start --dry-run
--- PASS: TestErrorSpam/start (0.87s)

TestErrorSpam/status (1.12s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-927030 --log_dir /tmp/nospam-927030 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-927030 --log_dir /tmp/nospam-927030 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-927030 --log_dir /tmp/nospam-927030 status
--- PASS: TestErrorSpam/status (1.12s)

TestErrorSpam/pause (1.85s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-927030 --log_dir /tmp/nospam-927030 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-927030 --log_dir /tmp/nospam-927030 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-927030 --log_dir /tmp/nospam-927030 pause
--- PASS: TestErrorSpam/pause (1.85s)

TestErrorSpam/unpause (2.09s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-927030 --log_dir /tmp/nospam-927030 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-927030 --log_dir /tmp/nospam-927030 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-927030 --log_dir /tmp/nospam-927030 unpause
--- PASS: TestErrorSpam/unpause (2.09s)

TestErrorSpam/stop (1.47s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-927030 --log_dir /tmp/nospam-927030 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-927030 --log_dir /tmp/nospam-927030 stop: (1.23834987s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-927030 --log_dir /tmp/nospam-927030 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-927030 --log_dir /tmp/nospam-927030 stop
--- PASS: TestErrorSpam/stop (1.47s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17586-749551/.minikube/files/etc/test/nested/copy/754902/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (89.07s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-471648 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1108 23:43:00.825253  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
E1108 23:43:00.831453  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
E1108 23:43:00.841699  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
E1108 23:43:00.861952  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
E1108 23:43:00.902511  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
E1108 23:43:00.982830  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
E1108 23:43:01.143228  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
E1108 23:43:01.463844  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
E1108 23:43:02.104535  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
E1108 23:43:03.384753  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
E1108 23:43:05.945554  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
E1108 23:43:11.065753  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
E1108 23:43:21.306737  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-471648 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m29.074724829s)
--- PASS: TestFunctional/serial/StartWithProxy (89.07s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.24s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-471648 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-471648 --alsologtostderr -v=8: (6.234954625s)
functional_test.go:659: soft start took 6.235500649s for "functional-471648" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.24s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-471648 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (5.01s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-471648 cache add registry.k8s.io/pause:3.1: (1.745862086s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 cache add registry.k8s.io/pause:3.3
E1108 23:43:41.786924  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-471648 cache add registry.k8s.io/pause:3.3: (1.739444078s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-471648 cache add registry.k8s.io/pause:latest: (1.527965696s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.01s)

TestFunctional/serial/CacheCmd/cache/add_local (2.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-471648 /tmp/TestFunctionalserialCacheCmdcacheadd_local2570796035/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 cache add minikube-local-cache-test:functional-471648
functional_test.go:1085: (dbg) Done: out/minikube-linux-arm64 -p functional-471648 cache add minikube-local-cache-test:functional-471648: (1.602617924s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 cache delete minikube-local-cache-test:functional-471648
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-471648
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.13s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.38s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.38s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.59s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-471648 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (335.785045ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-471648 cache reload: (1.538012849s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.59s)
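[Editor's note] cache_reload verifies that an image removed from the node (crictl rmi) is reported missing by crictl inspecti and comes back after "minikube cache reload". A sketch of that check-then-reload round trip; ensureCached is a hypothetical helper, not part of the test suite, and the profile and image are the ones from this run:

// If crictl inside the node no longer sees a cached image, push it back
// from the host cache with "cache reload", then rely on the next
// inspecti to confirm it.
package main

import (
	"fmt"
	"os/exec"
)

func ensureCached(profile, image string) error {
	check := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"ssh", "sudo", "crictl", "inspecti", image)
	if check.Run() == nil {
		return nil // image already present in the node's containerd store
	}
	reload := exec.Command("out/minikube-linux-arm64", "-p", profile, "cache", "reload")
	if err := reload.Run(); err != nil {
		return fmt.Errorf("cache reload failed: %w", err)
	}
	return nil
}

func main() {
	fmt.Println(ensureCached("functional-471648", "registry.k8s.io/pause:latest"))
}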

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.17s)

TestFunctional/serial/MinikubeKubectlCmd (0.17s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 kubectl -- --context functional-471648 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.17s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.18s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-471648 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.18s)

TestFunctional/serial/ExtraConfig (43.17s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-471648 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1108 23:44:22.747098  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-471648 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.165301181s)
functional_test.go:757: restart took 43.165398871s for "functional-471648" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.17s)

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-471648 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.85s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-471648 logs: (1.84975202s)
--- PASS: TestFunctional/serial/LogsCmd (1.85s)

TestFunctional/serial/LogsFileCmd (1.9s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 logs --file /tmp/TestFunctionalserialLogsFileCmd3373126092/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-471648 logs --file /tmp/TestFunctionalserialLogsFileCmd3373126092/001/logs.txt: (1.899687785s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.90s)

TestFunctional/serial/InvalidService (4.56s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-471648 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-471648
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-471648: exit status 115 (690.253137ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31677 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-471648 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.56s)
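
The SVC_UNREACHABLE exit above is the expected result of exposing a service with no running backing pod. testdata/invalidsvc.yaml itself is not included in this report; what follows is a hypothetical reconstruction using the Kubernetes API types, consistent with the "no running pod for service invalid-svc found" error (the selector value is invented):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"sigs.k8s.io/yaml"
)

func main() {
	// A NodePort service whose selector matches no pod: kubectl accepts it,
	// but `minikube service` finds no endpoint and exits with status 115.
	svc := corev1.Service{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Service"},
		ObjectMeta: metav1.ObjectMeta{Name: "invalid-svc"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"app": "no-such-pod"}, // matches nothing
			Ports:    []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(80)}},
		},
	}
	out, err := yaml.Marshal(svc)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}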

TestFunctional/parallel/ConfigCmd (0.67s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-471648 config get cpus: exit status 14 (118.221642ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-471648 config get cpus: exit status 14 (120.851319ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.67s)

TestFunctional/parallel/DashboardCmd (12.85s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-471648 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-471648 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 786724: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.85s)

TestFunctional/parallel/DryRun (0.6s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-471648 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-471648 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (223.954331ms)

-- stdout --
	* [functional-471648] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17586
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17586-749551/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-749551/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1108 23:45:27.762165  786144 out.go:296] Setting OutFile to fd 1 ...
	I1108 23:45:27.762390  786144 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:45:27.762403  786144 out.go:309] Setting ErrFile to fd 2...
	I1108 23:45:27.762410  786144 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:45:27.762681  786144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17586-749551/.minikube/bin
	I1108 23:45:27.763053  786144 out.go:303] Setting JSON to false
	I1108 23:45:27.764291  786144 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23277,"bootTime":1699463851,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1108 23:45:27.764369  786144 start.go:138] virtualization:  
	I1108 23:45:27.767975  786144 out.go:177] * [functional-471648] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1108 23:45:27.769699  786144 out.go:177]   - MINIKUBE_LOCATION=17586
	I1108 23:45:27.771213  786144 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 23:45:27.769910  786144 notify.go:220] Checking for updates...
	I1108 23:45:27.776170  786144 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17586-749551/kubeconfig
	I1108 23:45:27.778082  786144 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-749551/.minikube
	I1108 23:45:27.779651  786144 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 23:45:27.781481  786144 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 23:45:27.783499  786144 config.go:182] Loaded profile config "functional-471648": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1108 23:45:27.784115  786144 driver.go:378] Setting default libvirt URI to qemu:///system
	I1108 23:45:27.810092  786144 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1108 23:45:27.810208  786144 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 23:45:27.903874  786144 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-11-08 23:45:27.892883584 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1108 23:45:27.903995  786144 docker.go:295] overlay module found
	I1108 23:45:27.906034  786144 out.go:177] * Using the docker driver based on existing profile
	I1108 23:45:27.907669  786144 start.go:298] selected driver: docker
	I1108 23:45:27.907689  786144 start.go:902] validating driver "docker" against &{Name:functional-471648 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-471648 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 23:45:27.907782  786144 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 23:45:27.910288  786144 out.go:177] 
	W1108 23:45:27.911912  786144 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1108 23:45:27.913650  786144 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-471648 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.60s)
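
The dry run stops at preflight validation: 250MB is below the usable minimum of 1800MB, so start exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) without touching the existing profile. A sketch of a check of that shape; the threshold and exit status are taken from the output above, the code is not minikube's:

package main

import (
	"fmt"
	"os"
)

const minUsableMiB = 1800 // the "usable minimum" reported above

func validateRequestedMemory(reqMiB int) error {
	if reqMiB < minUsableMiB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			reqMiB, minUsableMiB)
	}
	return nil
}

func main() {
	if err := validateRequestedMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
		os.Exit(23) // the exit status recorded by the test
	}
}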

TestFunctional/parallel/InternationalLanguage (0.36s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-471648 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-471648 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (364.107313ms)

-- stdout --
	* [functional-471648] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17586
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17586-749551/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-749551/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1108 23:45:28.428798  786256 out.go:296] Setting OutFile to fd 1 ...
	I1108 23:45:28.428936  786256 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:45:28.428942  786256 out.go:309] Setting ErrFile to fd 2...
	I1108 23:45:28.428947  786256 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:45:28.429305  786256 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17586-749551/.minikube/bin
	I1108 23:45:28.429850  786256 out.go:303] Setting JSON to false
	I1108 23:45:28.430949  786256 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23278,"bootTime":1699463851,"procs":316,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1108 23:45:28.431025  786256 start.go:138] virtualization:  
	I1108 23:45:28.434583  786256 out.go:177] * [functional-471648] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I1108 23:45:28.436975  786256 out.go:177]   - MINIKUBE_LOCATION=17586
	I1108 23:45:28.439043  786256 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 23:45:28.437060  786256 notify.go:220] Checking for updates...
	I1108 23:45:28.443650  786256 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17586-749551/kubeconfig
	I1108 23:45:28.446487  786256 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-749551/.minikube
	I1108 23:45:28.448506  786256 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 23:45:28.450432  786256 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 23:45:28.452896  786256 config.go:182] Loaded profile config "functional-471648": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1108 23:45:28.453461  786256 driver.go:378] Setting default libvirt URI to qemu:///system
	I1108 23:45:28.520440  786256 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1108 23:45:28.520534  786256 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 23:45:28.648055  786256 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-11-08 23:45:28.636976303 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1108 23:45:28.648156  786256 docker.go:295] overlay module found
	I1108 23:45:28.650169  786256 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1108 23:45:28.652040  786256 start.go:298] selected driver: docker
	I1108 23:45:28.652059  786256 start.go:902] validating driver "docker" against &{Name:functional-471648 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-471648 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 23:45:28.652180  786256 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 23:45:28.654794  786256 out.go:177] 
	W1108 23:45:28.656620  786256 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1108 23:45:28.658234  786256 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.36s)
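
This test repeats the failing dry run under a French locale and expects the localized RSRC_INSUFFICIENT_REQ_MEMORY message shown above ("Fermeture en raison de ... L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo" — "Exiting due to ... the requested memory allocation of 250MiB is less than the usable minimum of 1800MB"). A generic POSIX-style locale lookup order; minikube's own translate package may differ:

package main

import (
	"fmt"
	"os"
)

// uiLocale applies the usual POSIX precedence: LC_ALL overrides
// LC_MESSAGES, which overrides LANG.
func uiLocale() string {
	for _, k := range []string{"LC_ALL", "LC_MESSAGES", "LANG"} {
		if v := os.Getenv(k); v != "" {
			return v
		}
	}
	return "en_US"
}

func main() {
	fmt.Println("locale:", uiLocale()) // e.g. fr_FR.UTF-8 for the run above
}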

TestFunctional/parallel/StatusCmd (1.36s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.36s)

TestFunctional/parallel/ServiceCmdConnect (8.85s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-471648 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-471648 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-nwdx7" [f4176dad-19a1-4642-bb1a-de222462c8d9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-nwdx7" [f4176dad-19a1-4642-bb1a-de222462c8d9] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.018996602s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30295
functional_test.go:1674: http://192.168.49.2:30295: success! body:

Hostname: hello-node-connect-7799dfb7c6-nwdx7

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30295
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.85s)
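
After `service hello-node-connect --url` resolves the NodePort endpoint, the harness fetches it and checks the echoserver body quoted above. Roughly equivalent, using the URL found in this run (it only resolves on that CI host):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://192.168.49.2:30295") // endpoint from this run
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d\n%s\n", resp.StatusCode, body) // body echoes Hostname, headers, etc.
}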

TestFunctional/parallel/AddonsCmd (0.27s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.27s)

TestFunctional/parallel/PersistentVolumeClaim (25.95s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [e0e88938-2df9-4957-bd7f-af11acf98eef] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.043973839s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-471648 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-471648 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-471648 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-471648 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [619a8d70-e8a5-4139-98a1-6c27f2427595] Pending
helpers_test.go:344: "sp-pod" [619a8d70-e8a5-4139-98a1-6c27f2427595] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [619a8d70-e8a5-4139-98a1-6c27f2427595] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.018035036s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-471648 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-471648 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-471648 delete -f testdata/storage-provisioner/pod.yaml: (1.542483271s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-471648 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a4ba07e0-d28a-4809-bb3d-efefa8be8338] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a4ba07e0-d28a-4809-bb3d-efefa8be8338] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.014748379s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-471648 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.95s)
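
The interesting assertion here is persistence: /tmp/mount is backed by the PVC, so a file written before the pod is deleted must still exist after the pod is recreated, because the volume's lifetime follows the claim rather than the pod. The kubectl sequence from the log, condensed into a sketch (the real test also waits for the replacement pod to become Running between apply and the final exec):

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to kubectl and echoes the command plus its combined output.
func run(args ...string) error {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s", args, out)
	return err
}

func main() {
	ctx := "--context=functional-471648"
	steps := [][]string{
		{ctx, "exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},
		{ctx, "delete", "-f", "testdata/storage-provisioner/pod.yaml"},
		{ctx, "apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		{ctx, "exec", "sp-pod", "--", "ls", "/tmp/mount"}, // expect: foo
	}
	for _, s := range steps {
		if err := run(s...); err != nil {
			panic(err)
		}
	}
}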

TestFunctional/parallel/SSHCmd (0.81s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.81s)

TestFunctional/parallel/CpCmd (1.76s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 ssh -n functional-471648 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 cp functional-471648:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1194624626/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 ssh -n functional-471648 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.76s)

TestFunctional/parallel/FileSync (0.45s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/754902/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 ssh "sudo cat /etc/test/nested/copy/754902/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.45s)

TestFunctional/parallel/CertSync (2.28s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/754902.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 ssh "sudo cat /etc/ssl/certs/754902.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/754902.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 ssh "sudo cat /usr/share/ca-certificates/754902.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/7549022.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 ssh "sudo cat /etc/ssl/certs/7549022.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/7549022.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 ssh "sudo cat /usr/share/ca-certificates/7549022.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.28s)

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-471648 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)
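
The go-template handed to kubectl above ranges over the node's label map and prints each key. The same template applied locally with Go's text/template; the two labels are stand-ins, not the node's actual metadata:

package main

import (
	"os"
	"text/template"
)

func main() {
	// Identical range expression to the one passed to kubectl, applied
	// directly to a labels map instead of (index .items 0).metadata.labels.
	tmpl := template.Must(template.New("labels").Parse(
		`{{range $k, $v := .}}{{$k}} {{end}}`))
	labels := map[string]string{
		"kubernetes.io/hostname": "functional-471648",
		"minikube.k8s.io/name":   "functional-471648",
	}
	if err := tmpl.Execute(os.Stdout, labels); err != nil {
		panic(err)
	}
}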

TestFunctional/parallel/NonActiveRuntimeDisabled (0.88s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-471648 ssh "sudo systemctl is-active docker": exit status 1 (454.040966ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-471648 ssh "sudo systemctl is-active crio": exit status 1 (420.793243ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.88s)
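
With containerd as the active runtime, `systemctl is-active docker` and `systemctl is-active crio` both print `inactive` and exit with status 3, which the ssh wrapper surfaces as the exit status 1 seen above. Recovering that exit code from Go's exec error is the only subtle part; a sketch:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// isActive runs `systemctl is-active <unit>` and returns the printed
// state plus the process exit code (0 = active, 3 = inactive).
func isActive(unit string) (string, int) {
	out, err := exec.Command("systemctl", "is-active", unit).CombinedOutput()
	code := 0
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		code = ee.ExitCode()
	}
	return string(out), code
}

func main() {
	state, code := isActive("docker")
	fmt.Printf("state=%s exit=%d\n", state, code) // e.g. state=inactive exit=3
}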

TestFunctional/parallel/License (0.43s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.43s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.26s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-471648 version -o=json --components: (1.255743375s)
--- PASS: TestFunctional/parallel/Version/components (1.26s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-471648 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-471648
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-471648 image ls --format short --alsologtostderr:
I1108 23:45:36.191420  787574 out.go:296] Setting OutFile to fd 1 ...
I1108 23:45:36.191670  787574 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1108 23:45:36.191748  787574 out.go:309] Setting ErrFile to fd 2...
I1108 23:45:36.191764  787574 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1108 23:45:36.192038  787574 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17586-749551/.minikube/bin
I1108 23:45:36.192702  787574 config.go:182] Loaded profile config "functional-471648": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1108 23:45:36.192846  787574 config.go:182] Loaded profile config "functional-471648": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1108 23:45:36.193330  787574 cli_runner.go:164] Run: docker container inspect functional-471648 --format={{.State.Status}}
I1108 23:45:36.219294  787574 ssh_runner.go:195] Run: systemctl --version
I1108 23:45:36.219368  787574 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-471648
I1108 23:45:36.238703  787574 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33717 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/functional-471648/id_rsa Username:docker}
I1108 23:45:36.335686  787574 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-471648 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-471648  | sha256:fcd915 | 1.01kB |
| docker.io/library/nginx                     | alpine             | sha256:aae348 | 19.6MB |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:9cdd64 | 86.5MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:04b4ea | 25.3MB |
| localhost/my-image                          | functional-471648  | sha256:76ec83 | 831kB  |
| registry.k8s.io/kube-apiserver              | v1.28.3            | sha256:537e9a | 31.6MB |
| registry.k8s.io/kube-controller-manager     | v1.28.3            | sha256:827643 | 30.3MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:97e046 | 14.6MB |
| registry.k8s.io/kube-scheduler              | v1.28.3            | sha256:42a4e7 | 17.1MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
| docker.io/library/nginx                     | latest             | sha256:81be38 | 67.2MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-proxy                  | v1.28.3            | sha256:a5dd5c | 22MB   |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-471648 image ls --format table --alsologtostderr:
I1108 23:45:40.564972  787920 out.go:296] Setting OutFile to fd 1 ...
I1108 23:45:40.565201  787920 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1108 23:45:40.565214  787920 out.go:309] Setting ErrFile to fd 2...
I1108 23:45:40.565220  787920 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1108 23:45:40.565548  787920 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17586-749551/.minikube/bin
I1108 23:45:40.566226  787920 config.go:182] Loaded profile config "functional-471648": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1108 23:45:40.566408  787920 config.go:182] Loaded profile config "functional-471648": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1108 23:45:40.567008  787920 cli_runner.go:164] Run: docker container inspect functional-471648 --format={{.State.Status}}
I1108 23:45:40.586550  787920 ssh_runner.go:195] Run: systemctl --version
I1108 23:45:40.586606  787920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-471648
I1108 23:45:40.605419  787920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33717 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/functional-471648/id_rsa Username:docker}
I1108 23:45:40.695158  787920 ssh_runner.go:195] Run: sudo crictl images --output json
2023/11/08 23:45:41 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-471648 image ls --format json --alsologtostderr:
[{"id":"sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"25324029"},{"id":"sha256:81be38025439476d1b7303cb575df80e419fd1b3be4a639f3b3e51cf95720c7b","repoDigests":["docker.io/library/nginx@sha256:86e53c4c16a6a276b204b0fd3a8143d86547c967dc8258b3d47c3a21bb68d3c6"],"repoTags":["docker.io/library/nginx:latest"],"size":"67241456"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"86464836"},{"id":"sha256:8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.3"],"size":"30344361"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:aae348c9fbd40035f9fc24e2c9ccb9ac0a8977a3f3441a997bb40f6011d45e9b","repoDigests":["docker.io/library/nginx@sha256:db353d0f0c479c91bd15e01fc68ed0f33d9c4c52f3415e63332c3d0bf7a4bb77"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19561536"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:76ec83efc0a2f88ce23c7d1b1a242d801af017184da93c988c8f1bbb4f455115","repoDigests":[],"repoTags":["localhost/my-image:functional-471648"],"size":"830633"},{"id":"sha256:a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd","repoDigests":["registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.3"],"size":"21981421"},{"id":"sha256:42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.3"],"size":"17063462"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:fcd9150b6d8df92d3ba1756bf47f75b21136817038e21b6e9cae919229e58c59","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-471648"],"size":"1007"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"14557471"},{"id":"sha256:537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.3"],"size":"31557550"},{"id":"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-471648 image ls --format json --alsologtostderr:
I1108 23:45:40.306539  787893 out.go:296] Setting OutFile to fd 1 ...
I1108 23:45:40.306852  787893 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1108 23:45:40.306864  787893 out.go:309] Setting ErrFile to fd 2...
I1108 23:45:40.306890  787893 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1108 23:45:40.307273  787893 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17586-749551/.minikube/bin
I1108 23:45:40.308477  787893 config.go:182] Loaded profile config "functional-471648": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1108 23:45:40.308644  787893 config.go:182] Loaded profile config "functional-471648": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1108 23:45:40.309359  787893 cli_runner.go:164] Run: docker container inspect functional-471648 --format={{.State.Status}}
I1108 23:45:40.328302  787893 ssh_runner.go:195] Run: systemctl --version
I1108 23:45:40.328357  787893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-471648
I1108 23:45:40.347654  787893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33717 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/functional-471648/id_rsa Username:docker}
I1108 23:45:40.439290  787893 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
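Note: the JSON above is what `image ls --format json` prints after running `sudo crictl images --output json` on the node (see the stderr trace). As a minimal sketch, not minikube's own code, each entry can be decoded in Go with a struct matching the four fields shown; the struct name is illustrative and the sample value is copied from the listing above.

package main

import (
	"encoding/json"
	"fmt"
)

// imageEntry mirrors one element of the `image ls --format json` output
// above; "size" is a quoted number in the output, so it stays a string.
type imageEntry struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	// One entry copied from the listing above.
	raw := []byte(`[{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"}]`)
	var images []imageEntry
	if err := json.Unmarshal(raw, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.ID, img.RepoTags, img.Size)
	}
}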

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-471648 image ls --format yaml --alsologtostderr:
- id: sha256:a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd
repoDigests:
- registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072
repoTags:
- registry.k8s.io/kube-proxy:v1.28.3
size: "21981421"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "14557471"
- id: sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "86464836"
- id: sha256:8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.3
size: "30344361"
- id: sha256:fcd9150b6d8df92d3ba1756bf47f75b21136817038e21b6e9cae919229e58c59
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-471648
size: "1007"
- id: sha256:42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.3
size: "17063462"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:aae348c9fbd40035f9fc24e2c9ccb9ac0a8977a3f3441a997bb40f6011d45e9b
repoDigests:
- docker.io/library/nginx@sha256:db353d0f0c479c91bd15e01fc68ed0f33d9c4c52f3415e63332c3d0bf7a4bb77
repoTags:
- docker.io/library/nginx:alpine
size: "19561536"
- id: sha256:81be38025439476d1b7303cb575df80e419fd1b3be4a639f3b3e51cf95720c7b
repoDigests:
- docker.io/library/nginx@sha256:86e53c4c16a6a276b204b0fd3a8143d86547c967dc8258b3d47c3a21bb68d3c6
repoTags:
- docker.io/library/nginx:latest
size: "67241456"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "25324029"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.3
size: "31557550"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-471648 image ls --format yaml --alsologtostderr:
I1108 23:45:36.516390  787602 out.go:296] Setting OutFile to fd 1 ...
I1108 23:45:36.516593  787602 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1108 23:45:36.516599  787602 out.go:309] Setting ErrFile to fd 2...
I1108 23:45:36.516605  787602 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1108 23:45:36.516917  787602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17586-749551/.minikube/bin
I1108 23:45:36.517685  787602 config.go:182] Loaded profile config "functional-471648": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1108 23:45:36.517860  787602 config.go:182] Loaded profile config "functional-471648": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1108 23:45:36.518375  787602 cli_runner.go:164] Run: docker container inspect functional-471648 --format={{.State.Status}}
I1108 23:45:36.537691  787602 ssh_runner.go:195] Run: systemctl --version
I1108 23:45:36.537744  787602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-471648
I1108 23:45:36.559755  787602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33717 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/functional-471648/id_rsa Username:docker}
I1108 23:45:36.655570  787602 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-471648 ssh pgrep buildkitd: exit status 1 (423.731273ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 image build -t localhost/my-image:functional-471648 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-471648 image build -t localhost/my-image:functional-471648 testdata/build --alsologtostderr: (2.787174138s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-471648 image build -t localhost/my-image:functional-471648 testdata/build --alsologtostderr:
I1108 23:45:37.236464  787680 out.go:296] Setting OutFile to fd 1 ...
I1108 23:45:37.237305  787680 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1108 23:45:37.237349  787680 out.go:309] Setting ErrFile to fd 2...
I1108 23:45:37.237371  787680 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1108 23:45:37.237687  787680 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17586-749551/.minikube/bin
I1108 23:45:37.238450  787680 config.go:182] Loaded profile config "functional-471648": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1108 23:45:37.239174  787680 config.go:182] Loaded profile config "functional-471648": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1108 23:45:37.239779  787680 cli_runner.go:164] Run: docker container inspect functional-471648 --format={{.State.Status}}
I1108 23:45:37.258595  787680 ssh_runner.go:195] Run: systemctl --version
I1108 23:45:37.258643  787680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-471648
I1108 23:45:37.278148  787680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33717 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/functional-471648/id_rsa Username:docker}
I1108 23:45:37.379611  787680 build_images.go:151] Building image from path: /tmp/build.3355416813.tar
I1108 23:45:37.379679  787680 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1108 23:45:37.390832  787680 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3355416813.tar
I1108 23:45:37.396435  787680 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3355416813.tar: stat -c "%s %y" /var/lib/minikube/build/build.3355416813.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3355416813.tar': No such file or directory
I1108 23:45:37.396463  787680 ssh_runner.go:362] scp /tmp/build.3355416813.tar --> /var/lib/minikube/build/build.3355416813.tar (3072 bytes)
I1108 23:45:37.427730  787680 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3355416813
I1108 23:45:37.438826  787680 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3355416813 -xf /var/lib/minikube/build/build.3355416813.tar
I1108 23:45:37.450190  787680 containerd.go:378] Building image: /var/lib/minikube/build/build.3355416813
I1108 23:45:37.450347  787680 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3355416813 --local dockerfile=/var/lib/minikube/build/build.3355416813 --output type=image,name=localhost/my-image:functional-471648
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.5s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:88ef68825b0ab02bc85ebdd9c3169bc18102a37f511049511bc0c45de7facd7b 0.0s done
#8 exporting config sha256:76ec83efc0a2f88ce23c7d1b1a242d801af017184da93c988c8f1bbb4f455115
#8 exporting config sha256:76ec83efc0a2f88ce23c7d1b1a242d801af017184da93c988c8f1bbb4f455115 0.0s done
#8 naming to localhost/my-image:functional-471648 done
#8 DONE 0.1s
I1108 23:45:39.909352  787680 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3355416813 --local dockerfile=/var/lib/minikube/build/build.3355416813 --output type=image,name=localhost/my-image:functional-471648: (2.458962142s)
I1108 23:45:39.909424  787680 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3355416813
I1108 23:45:39.920723  787680 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3355416813.tar
I1108 23:45:39.931591  787680 build_images.go:207] Built localhost/my-image:functional-471648 from /tmp/build.3355416813.tar
I1108 23:45:39.931645  787680 build_images.go:123] succeeded building to: functional-471648
I1108 23:45:39.931651  787680 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.50s)
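Note: the build above stages the context as a tar under /var/lib/minikube/build, unpacks it, and then drives BuildKit directly. A hedged Go sketch of just that final step, reusing the buildctl flags from the log verbatim; the directory and image name below are placeholders, not the run's actual build.3355416813 path.

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Placeholder build directory; the run above used build.3355416813.
	dir := "/var/lib/minikube/build/build.example"
	// Same buildctl invocation shape as in the log, with illustrative names.
	cmd := exec.Command("sudo", "buildctl", "build",
		"--frontend", "dockerfile.v0",
		"--local", "context="+dir,
		"--local", "dockerfile="+dir,
		"--output", "type=image,name=localhost/my-image:example")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}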

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.452521066s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-471648
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.50s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.28s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.26s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (9.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-471648 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-471648 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-b5n46" [d2def0c2-e8d6-40ec-89cc-a8184ed9ce51] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-b5n46" [d2def0c2-e8d6-40ec-89cc-a8184ed9ce51] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.027537197s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.31s)
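Note: the check above waits for pods matching the label app=hello-node to report Running. A rough client-go equivalent of that label-selector lookup, as a one-shot query rather than the test's 10m0s wait loop; the KUBECONFIG handling is an assumption (the test itself uses --context functional-471648).

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes KUBECONFIG points at the test cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same label selector the test polls on.
	pods, err := cs.CoreV1().Pods("default").List(context.Background(),
		metav1.ListOptions{LabelSelector: "app=hello-node"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name, "running:", p.Status.Phase == corev1.PodRunning)
	}
}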

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.49s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 service list -o json
functional_test.go:1493: Took "438.453551ms" to run "out/minikube-linux-arm64 -p functional-471648 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:31877
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.55s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:31877
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.54s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-471648 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-471648 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-471648 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-471648 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 784272: os: process already finished
helpers_test.go:502: unable to terminate pid 784136: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.79s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-471648 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-471648 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [fdc809fa-e4d1-44cf-a7e5-c15083314b20] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [fdc809fa-e4d1-44cf-a7e5-c15083314b20] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.017702892s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 image rm gcr.io/google-containers/addon-resizer:functional-471648 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.92s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-471648
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 image save --daemon gcr.io/google-containers/addon-resizer:functional-471648 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-471648
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.67s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-471648 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)
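Note: the jsonpath query above reads .status.loadBalancer.ingress[0].ip, the address the tunnel assigns to nginx-svc. A hypothetical client-go sketch of the same lookup; the kubeconfig wiring is assumed, and the service name is taken from the test.

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	svc, err := cs.CoreV1().Services("default").Get(context.Background(),
		"nginx-svc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Go equivalent of jsonpath={.status.loadBalancer.ingress[0].ip}.
	if len(svc.Status.LoadBalancer.Ingress) > 0 {
		fmt.Println(svc.Status.LoadBalancer.Ingress[0].IP)
	}
}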

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.129.75 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-471648 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "401.343057ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "84.351409ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "356.241062ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "88.455066ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-471648 /tmp/TestFunctionalparallelMountCmdany-port545482679/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1699487122025099109" to /tmp/TestFunctionalparallelMountCmdany-port545482679/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1699487122025099109" to /tmp/TestFunctionalparallelMountCmdany-port545482679/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1699487122025099109" to /tmp/TestFunctionalparallelMountCmdany-port545482679/001/test-1699487122025099109
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-471648 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (453.245797ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  8 23:45 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  8 23:45 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  8 23:45 test-1699487122025099109
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 ssh cat /mount-9p/test-1699487122025099109
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-471648 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [b4100334-7740-4fbb-9888-1c4aadef8422] Pending
helpers_test.go:344: "busybox-mount" [b4100334-7740-4fbb-9888-1c4aadef8422] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [b4100334-7740-4fbb-9888-1c4aadef8422] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [b4100334-7740-4fbb-9888-1c4aadef8422] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.024518747s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-471648 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-471648 /tmp/TestFunctionalparallelMountCmdany-port545482679/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.62s)
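Note: the first findmnt probe above exits non-zero and the test simply retries until the 9p mount appears. A small illustrative Go retry loop in that spirit; the timeout and interval are arbitrary choices, not the test's values.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// pollMount retries `findmnt -T <dir>` until it succeeds or the deadline
// passes, mirroring the probe-then-retry pattern visible in the log.
func pollMount(dir string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("findmnt", "-T", dir).CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out)
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("%s not mounted after %v: %v", dir, timeout, err)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	if err := pollMount("/mount-9p", 30*time.Second); err != nil {
		panic(err)
	}
}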

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-471648 /tmp/TestFunctionalparallelMountCmdspecific-port3161342009/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-471648 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (720.051665ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-471648 /tmp/TestFunctionalparallelMountCmdspecific-port3161342009/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-471648 ssh "sudo umount -f /mount-9p": exit status 1 (357.872417ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-471648 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-471648 /tmp/TestFunctionalparallelMountCmdspecific-port3161342009/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.45s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-471648 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4081031784/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-471648 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4081031784/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-471648 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4081031784/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-471648 ssh "findmnt -T" /mount1: (1.046307828s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-471648 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-471648 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-471648 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4081031784/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-471648 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4081031784/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-471648 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4081031784/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.87s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.09s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-471648
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-471648
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-471648
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (90.8s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-316909 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1108 23:45:44.667700  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-316909 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m30.799199067s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (90.80s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (9.19s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-316909 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-316909 addons enable ingress --alsologtostderr -v=5: (9.194016453s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (9.19s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.64s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-316909 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.64s)

                                                
                                    
TestJSONOutput/start/Command (77.24s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-729791 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E1108 23:48:28.509560  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-729791 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m17.236616902s)
--- PASS: TestJSONOutput/start/Command (77.24s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.79s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-729791 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.79s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-729791 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.73s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.89s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-729791 --output=json --user=testUser
E1108 23:49:46.145227  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/functional-471648/client.crt: no such file or directory
E1108 23:49:46.150515  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/functional-471648/client.crt: no such file or directory
E1108 23:49:46.160809  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/functional-471648/client.crt: no such file or directory
E1108 23:49:46.181136  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/functional-471648/client.crt: no such file or directory
E1108 23:49:46.221504  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/functional-471648/client.crt: no such file or directory
E1108 23:49:46.301890  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/functional-471648/client.crt: no such file or directory
E1108 23:49:46.462376  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/functional-471648/client.crt: no such file or directory
E1108 23:49:46.783027  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/functional-471648/client.crt: no such file or directory
E1108 23:49:47.424049  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/functional-471648/client.crt: no such file or directory
E1108 23:49:48.704329  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/functional-471648/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-729791 --output=json --user=testUser: (5.887116909s)
--- PASS: TestJSONOutput/stop/Command (5.89s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.27s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-137252 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-137252 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.793536ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b29f8899-b5af-4124-841c-155afc580712","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-137252] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"81794073-82a9-4016-865c-1ef95775d192","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17586"}}
	{"specversion":"1.0","id":"0bfd8eed-e125-41eb-88e6-97715c2fa543","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"169279b3-1ca3-43f2-82ca-78ea9127200c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17586-749551/kubeconfig"}}
	{"specversion":"1.0","id":"decf7a38-768b-4165-a797-583158826827","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-749551/.minikube"}}
	{"specversion":"1.0","id":"7e54c3ab-4906-489a-8275-bc99b5e9a67a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"4b387c9a-fb87-45cc-8be4-0b54a79bcf7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4fd92621-1044-47e7-90c5-f1da849d4d4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-137252" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-137252
--- PASS: TestErrorJSONOutput (0.27s)
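Each line in the stdout block above is a CloudEvents-style JSON object emitted by --output=json. A minimal sketch of consuming such a stream in Go, keyed only to the field names visible in the log (the struct is illustrative, not minikube's internal type):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors the fields visible in the log lines above.
	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		// e.g. piped from: out/minikube-linux-arm64 start ... --output=json
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var e event
			if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
				continue // skip any non-JSON lines
			}
			// io.k8s.sigs.minikube.error events carry the exit code, as seen above.
			if e.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error %s: %s\n", e.Data["exitcode"], e.Data["message"])
			}
		}
	}

Run against the stdout above, this would print the DRV_UNSUPPORTED_OS message with exit code 56.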

TestKicCustomNetwork/create_custom_network (46.95s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-475628 --network=
E1108 23:49:56.385775  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/functional-471648/client.crt: no such file or directory
E1108 23:50:06.626913  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/functional-471648/client.crt: no such file or directory
E1108 23:50:27.107138  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/functional-471648/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-475628 --network=: (44.653674307s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-475628" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-475628
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-475628: (2.268857507s)
--- PASS: TestKicCustomNetwork/create_custom_network (46.95s)
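The cleanup step above checks created networks with docker network ls --format {{.Name}}. The same existence check driven from Go, as a sketch (assumes the docker CLI is on PATH; the network name comes from this run):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// networkExists reports whether a docker network with the given name is listed.
	func networkExists(name string) (bool, error) {
		out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
		if err != nil {
			return false, err
		}
		for _, n := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if n == name {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := networkExists("docker-network-475628")
		fmt.Println("network present:", ok, err)
	}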

TestKicCustomNetwork/use_default_bridge_network (34.18s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-451401 --network=bridge
E1108 23:51:08.067756  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/functional-471648/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-451401 --network=bridge: (32.101000597s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-451401" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-451401
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-451401: (2.057821018s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.18s)

TestKicExistingNetwork (33.98s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-303359 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-303359 --network=existing-network: (31.756416292s)
helpers_test.go:175: Cleaning up "existing-network-303359" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-303359
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-303359: (2.052067223s)
--- PASS: TestKicExistingNetwork (33.98s)

TestKicCustomSubnet (36.72s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-857735 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-857735 --subnet=192.168.60.0/24: (34.510566545s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-857735 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-857735" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-857735
E1108 23:52:25.286415  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt: no such file or directory
E1108 23:52:25.298638  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt: no such file or directory
E1108 23:52:25.309192  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt: no such file or directory
E1108 23:52:25.330038  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt: no such file or directory
E1108 23:52:25.370372  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt: no such file or directory
E1108 23:52:25.450648  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt: no such file or directory
E1108 23:52:25.611078  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt: no such file or directory
E1108 23:52:25.931817  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt: no such file or directory
E1108 23:52:26.572642  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-857735: (2.184341198s)
--- PASS: TestKicCustomSubnet (36.72s)
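The subnet assertion above relies on docker's Go-template inspection of the network's IPAM config. A sketch of the same comparison (docker CLI on PATH assumed; the network name and CIDR are taken from this run):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Read the first IPAM config entry of the custom network.
		out, err := exec.Command("docker", "network", "inspect", "custom-subnet-857735",
			"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		got := strings.TrimSpace(string(out))
		fmt.Println("subnet matches requested CIDR:", got == "192.168.60.0/24")
	}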

TestKicStaticIP (35.38s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-455758 --static-ip=192.168.200.200
E1108 23:52:27.852883  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt: no such file or directory
E1108 23:52:29.987976  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/functional-471648/client.crt: no such file or directory
E1108 23:52:30.413129  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt: no such file or directory
E1108 23:52:35.533656  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt: no such file or directory
E1108 23:52:45.774033  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-455758 --static-ip=192.168.200.200: (32.913010371s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-455758 ip
helpers_test.go:175: Cleaning up "static-ip-455758" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-455758
E1108 23:53:00.824437  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-455758: (2.285139952s)
--- PASS: TestKicStaticIP (35.38s)
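The final ip step above is what closes the loop: the address minikube reports must equal the one requested via --static-ip. A minimal sketch of that check (binary path and profile name from this run):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "static-ip-455758", "ip").Output()
		if err != nil {
			fmt.Println("ip query failed:", err)
			return
		}
		fmt.Println("static IP honored:", strings.TrimSpace(string(out)) == "192.168.200.200")
	}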

TestMainNoArgs (0.08s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (71.13s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-227986 --driver=docker  --container-runtime=containerd
E1108 23:53:06.254769  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-227986 --driver=docker  --container-runtime=containerd: (30.614356859s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-230538 --driver=docker  --container-runtime=containerd
E1108 23:53:47.215376  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-230538 --driver=docker  --container-runtime=containerd: (34.897849369s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-227986
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-230538
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-230538" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-230538
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-230538: (2.021019106s)
helpers_test.go:175: Cleaning up "first-227986" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-227986
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-227986: (2.25341643s)
--- PASS: TestMinikubeProfile (71.13s)
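profile list -ojson above returns machine-readable profile data. A sketch of decoding it; the valid/invalid envelope below is an assumption based on this minikube version's output, so treat the structs as illustrative:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type profile struct {
		Name string `json:"Name"`
	}

	// profileList assumes the {"invalid":[...],"valid":[...]} envelope.
	type profileList struct {
		Valid   []profile `json:"valid"`
		Invalid []profile `json:"invalid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-ojson").Output()
		if err != nil {
			fmt.Println("profile list failed:", err)
			return
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		for _, p := range pl.Valid {
			fmt.Println("profile:", p.Name)
		}
	}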

TestMountStart/serial/StartWithMountFirst (9.32s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-358972 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-358972 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.31560641s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.32s)
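StartWithMountFirst exercises the full set of 9p mount flags (--mount-gid, --mount-msize, --mount-port, --mount-uid); the Verify* steps that follow only need to list the mount point over ssh. A sketch of that verification (profile name from this run):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// The mount is considered live if the guest can list /minikube-host.
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "mount-start-1-358972",
			"ssh", "--", "ls", "/minikube-host").CombinedOutput()
		if err != nil {
			fmt.Println("mount not visible:", err)
			return
		}
		fmt.Printf("mounted host dir contents:\n%s", out)
	}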

TestMountStart/serial/VerifyMountFirst (0.3s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-358972 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)

TestMountStart/serial/StartWithMountSecond (9.93s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-360982 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-360982 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.933158536s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.93s)

TestMountStart/serial/VerifyMountSecond (0.29s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-360982 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

TestMountStart/serial/DeleteFirst (1.7s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-358972 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-358972 --alsologtostderr -v=5: (1.704640768s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-360982 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (1.25s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-360982
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-360982: (1.245338809s)
--- PASS: TestMountStart/serial/Stop (1.25s)

TestMountStart/serial/RestartStopped (7.88s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-360982
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-360982: (6.877633188s)
--- PASS: TestMountStart/serial/RestartStopped (7.88s)

TestMountStart/serial/VerifyMountPostStop (0.3s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-360982 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

TestMultiNode/serial/FreshStart2Nodes (81.09s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-169917 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1108 23:55:09.135884  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt: no such file or directory
E1108 23:55:13.829560  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/functional-471648/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-169917 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m20.531389292s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (81.09s)

TestMultiNode/serial/DeployApp2Nodes (6.78s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-169917 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-169917 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-169917 -- rollout status deployment/busybox: (4.403812731s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-169917 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-169917 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-169917 -- exec busybox-5bc68d56bd-dkh5q -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-169917 -- exec busybox-5bc68d56bd-gmv8t -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-169917 -- exec busybox-5bc68d56bd-dkh5q -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-169917 -- exec busybox-5bc68d56bd-gmv8t -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-169917 -- exec busybox-5bc68d56bd-dkh5q -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-169917 -- exec busybox-5bc68d56bd-gmv8t -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.78s)
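The rollout check above reads pod IPs with a jsonpath query; with two busybox replicas spread across two nodes, each pod should report a distinct address. A sketch of that distinctness assertion over the same query (kubectl context from this run assumed):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "get", "pods",
			"-o", "jsonpath={.items[*].status.podIP}").Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		seen := map[string]bool{}
		for _, ip := range strings.Fields(string(out)) {
			if seen[ip] {
				fmt.Println("duplicate pod IP:", ip)
				return
			}
			seen[ip] = true
		}
		fmt.Println("all pod IPs distinct")
	}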

TestMultiNode/serial/PingHostFrom2Pods (1.32s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-169917 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-169917 -- exec busybox-5bc68d56bd-dkh5q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-169917 -- exec busybox-5bc68d56bd-dkh5q -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-169917 -- exec busybox-5bc68d56bd-gmv8t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-169917 -- exec busybox-5bc68d56bd-gmv8t -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.32s)
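The shell pipeline above, nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3, picks line 5 of the nslookup output and extracts its third space-separated field, the resolved host address that is then pinged from inside each pod. The same extraction in Go, for reference (the sample output shape is illustrative of busybox's nslookup, not captured from this run):

	package main

	import (
		"fmt"
		"strings"
	)

	// hostAddr mimics `awk 'NR==5' | cut -d' ' -f3`: take line 5,
	// then the third single-space-separated field.
	func hostAddr(nslookupOut string) string {
		lines := strings.Split(nslookupOut, "\n")
		if len(lines) < 5 {
			return ""
		}
		fields := strings.Split(lines[4], " ")
		if len(fields) < 3 {
			return ""
		}
		return fields[2]
	}

	func main() {
		sample := "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nName:\thost.minikube.internal\nAddress: 1 192.168.58.1\n"
		fmt.Println(hostAddr(sample)) // 192.168.58.1
	}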

TestMultiNode/serial/AddNode (16.7s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-169917 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-169917 -v 3 --alsologtostderr: (15.958212311s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.70s)

TestMultiNode/serial/ProfileList (0.39s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.39s)

TestMultiNode/serial/CopyFile (11.49s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 cp testdata/cp-test.txt multinode-169917:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 ssh -n multinode-169917 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 cp multinode-169917:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile299089249/001/cp-test_multinode-169917.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 ssh -n multinode-169917 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 cp multinode-169917:/home/docker/cp-test.txt multinode-169917-m02:/home/docker/cp-test_multinode-169917_multinode-169917-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 ssh -n multinode-169917 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 ssh -n multinode-169917-m02 "sudo cat /home/docker/cp-test_multinode-169917_multinode-169917-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 cp multinode-169917:/home/docker/cp-test.txt multinode-169917-m03:/home/docker/cp-test_multinode-169917_multinode-169917-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 ssh -n multinode-169917 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 ssh -n multinode-169917-m03 "sudo cat /home/docker/cp-test_multinode-169917_multinode-169917-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 cp testdata/cp-test.txt multinode-169917-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 ssh -n multinode-169917-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 cp multinode-169917-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile299089249/001/cp-test_multinode-169917-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 ssh -n multinode-169917-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 cp multinode-169917-m02:/home/docker/cp-test.txt multinode-169917:/home/docker/cp-test_multinode-169917-m02_multinode-169917.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 ssh -n multinode-169917-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 ssh -n multinode-169917 "sudo cat /home/docker/cp-test_multinode-169917-m02_multinode-169917.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 cp multinode-169917-m02:/home/docker/cp-test.txt multinode-169917-m03:/home/docker/cp-test_multinode-169917-m02_multinode-169917-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 ssh -n multinode-169917-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 ssh -n multinode-169917-m03 "sudo cat /home/docker/cp-test_multinode-169917-m02_multinode-169917-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 cp testdata/cp-test.txt multinode-169917-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 ssh -n multinode-169917-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 cp multinode-169917-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile299089249/001/cp-test_multinode-169917-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 ssh -n multinode-169917-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 cp multinode-169917-m03:/home/docker/cp-test.txt multinode-169917:/home/docker/cp-test_multinode-169917-m03_multinode-169917.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 ssh -n multinode-169917-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 ssh -n multinode-169917 "sudo cat /home/docker/cp-test_multinode-169917-m03_multinode-169917.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 cp multinode-169917-m03:/home/docker/cp-test.txt multinode-169917-m02:/home/docker/cp-test_multinode-169917-m03_multinode-169917-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 ssh -n multinode-169917-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 ssh -n multinode-169917-m02 "sudo cat /home/docker/cp-test_multinode-169917-m03_multinode-169917-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.49s)
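Every cp in the matrix above is validated by cat-ing the file back over ssh and comparing contents. One round trip condensed into a sketch (binary, profile, and paths are from this run):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		want, err := os.ReadFile("testdata/cp-test.txt")
		if err != nil {
			fmt.Println("read failed:", err)
			return
		}
		bin := "out/minikube-linux-arm64"
		if err := exec.Command(bin, "-p", "multinode-169917", "cp",
			"testdata/cp-test.txt", "multinode-169917:/home/docker/cp-test.txt").Run(); err != nil {
			fmt.Println("cp failed:", err)
			return
		}
		got, err := exec.Command(bin, "-p", "multinode-169917", "ssh", "-n", "multinode-169917",
			"sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			fmt.Println("ssh failed:", err)
			return
		}
		fmt.Println("round trip intact:", bytes.Equal(want, got))
	}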

TestMultiNode/serial/StopNode (2.42s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-169917 node stop m03: (1.276929623s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-169917 status: exit status 7 (556.808347ms)

-- stdout --
	multinode-169917
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-169917-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-169917-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-169917 status --alsologtostderr: exit status 7 (584.170159ms)

-- stdout --
	multinode-169917
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-169917-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-169917-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1108 23:56:46.025621  835153 out.go:296] Setting OutFile to fd 1 ...
	I1108 23:56:46.025834  835153 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:56:46.025863  835153 out.go:309] Setting ErrFile to fd 2...
	I1108 23:56:46.025883  835153 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:56:46.026170  835153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17586-749551/.minikube/bin
	I1108 23:56:46.026935  835153 out.go:303] Setting JSON to false
	I1108 23:56:46.027014  835153 mustload.go:65] Loading cluster: multinode-169917
	I1108 23:56:46.027054  835153 notify.go:220] Checking for updates...
	I1108 23:56:46.027535  835153 config.go:182] Loaded profile config "multinode-169917": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1108 23:56:46.027572  835153 status.go:255] checking status of multinode-169917 ...
	I1108 23:56:46.028113  835153 cli_runner.go:164] Run: docker container inspect multinode-169917 --format={{.State.Status}}
	I1108 23:56:46.048808  835153 status.go:330] multinode-169917 host status = "Running" (err=<nil>)
	I1108 23:56:46.048867  835153 host.go:66] Checking if "multinode-169917" exists ...
	I1108 23:56:46.049212  835153 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-169917
	I1108 23:56:46.072829  835153 host.go:66] Checking if "multinode-169917" exists ...
	I1108 23:56:46.073137  835153 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 23:56:46.073294  835153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-169917
	I1108 23:56:46.098993  835153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33784 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/multinode-169917/id_rsa Username:docker}
	I1108 23:56:46.192508  835153 ssh_runner.go:195] Run: systemctl --version
	I1108 23:56:46.198554  835153 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 23:56:46.212595  835153 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 23:56:46.290028  835153 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:55 SystemTime:2023-11-08 23:56:46.279520557 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1108 23:56:46.290616  835153 kubeconfig.go:92] found "multinode-169917" server: "https://192.168.58.2:8443"
	I1108 23:56:46.290644  835153 api_server.go:166] Checking apiserver status ...
	I1108 23:56:46.290692  835153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 23:56:46.304540  835153 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1319/cgroup
	I1108 23:56:46.316281  835153 api_server.go:182] apiserver freezer: "2:freezer:/docker/51d39fb0f3c85fb654bd8def095c9514b1a599539672d48a8a4cadd96adcb410/kubepods/burstable/pod236a2b8fb91c8659953e7a1a8e283054/4836d87547afe5ee04e9bf294af95090991f6b55b86d2c2abcb3a88724feb5cd"
	I1108 23:56:46.316361  835153 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/51d39fb0f3c85fb654bd8def095c9514b1a599539672d48a8a4cadd96adcb410/kubepods/burstable/pod236a2b8fb91c8659953e7a1a8e283054/4836d87547afe5ee04e9bf294af95090991f6b55b86d2c2abcb3a88724feb5cd/freezer.state
	I1108 23:56:46.327107  835153 api_server.go:204] freezer state: "THAWED"
	I1108 23:56:46.327136  835153 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1108 23:56:46.337186  835153 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1108 23:56:46.337220  835153 status.go:421] multinode-169917 apiserver status = Running (err=<nil>)
	I1108 23:56:46.337234  835153 status.go:257] multinode-169917 status: &{Name:multinode-169917 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 23:56:46.337251  835153 status.go:255] checking status of multinode-169917-m02 ...
	I1108 23:56:46.337615  835153 cli_runner.go:164] Run: docker container inspect multinode-169917-m02 --format={{.State.Status}}
	I1108 23:56:46.355768  835153 status.go:330] multinode-169917-m02 host status = "Running" (err=<nil>)
	I1108 23:56:46.355798  835153 host.go:66] Checking if "multinode-169917-m02" exists ...
	I1108 23:56:46.356116  835153 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-169917-m02
	I1108 23:56:46.375359  835153 host.go:66] Checking if "multinode-169917-m02" exists ...
	I1108 23:56:46.375682  835153 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 23:56:46.375739  835153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-169917-m02
	I1108 23:56:46.395286  835153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33789 SSHKeyPath:/home/jenkins/minikube-integration/17586-749551/.minikube/machines/multinode-169917-m02/id_rsa Username:docker}
	I1108 23:56:46.493778  835153 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 23:56:46.508343  835153 status.go:257] multinode-169917-m02 status: &{Name:multinode-169917-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1108 23:56:46.508381  835153 status.go:255] checking status of multinode-169917-m03 ...
	I1108 23:56:46.508695  835153 cli_runner.go:164] Run: docker container inspect multinode-169917-m03 --format={{.State.Status}}
	I1108 23:56:46.528234  835153 status.go:330] multinode-169917-m03 host status = "Stopped" (err=<nil>)
	I1108 23:56:46.528258  835153 status.go:343] host is not running, skipping remaining checks
	I1108 23:56:46.528266  835153 status.go:257] multinode-169917-m03 status: &{Name:multinode-169917-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.42s)
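Note the pattern above: with m03 stopped, minikube status exits nonzero (exit status 7) while the test still passes, because the test asserts on the exit code rather than treating it as failure. A sketch of reading that code from Go (*exec.ExitError exposes it):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("out/minikube-linux-arm64", "-p", "multinode-169917", "status").Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// Exit status 7 is what the log above shows when a node is stopped.
			fmt.Println("status exit code:", ee.ExitCode())
			return
		}
		fmt.Println("all nodes running:", err == nil)
	}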

TestMultiNode/serial/StartAfterStop (11.96s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-169917 node start m03 --alsologtostderr: (11.084976385s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.96s)

TestMultiNode/serial/RestartKeepsNodes (120s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-169917
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-169917
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-169917: (25.413949296s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-169917 --wait=true -v=8 --alsologtostderr
E1108 23:57:25.286269  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt: no such file or directory
E1108 23:57:52.976837  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt: no such file or directory
E1108 23:58:00.824691  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-169917 --wait=true -v=8 --alsologtostderr: (1m34.402125607s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-169917
--- PASS: TestMultiNode/serial/RestartKeepsNodes (120.00s)

TestMultiNode/serial/DeleteNode (5.19s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-169917 node delete m03: (4.425438217s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.19s)

TestMultiNode/serial/StopMultiNode (24.3s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 stop
E1108 23:59:23.872540  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-169917 stop: (24.063673541s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-169917 status: exit status 7 (111.437044ms)

-- stdout --
	multinode-169917
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-169917-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-169917 status --alsologtostderr: exit status 7 (123.215643ms)

-- stdout --
	multinode-169917
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-169917-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1108 23:59:27.931822  843793 out.go:296] Setting OutFile to fd 1 ...
	I1108 23:59:27.932052  843793 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:59:27.932080  843793 out.go:309] Setting ErrFile to fd 2...
	I1108 23:59:27.932100  843793 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:59:27.932404  843793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17586-749551/.minikube/bin
	I1108 23:59:27.932639  843793 out.go:303] Setting JSON to false
	I1108 23:59:27.932722  843793 mustload.go:65] Loading cluster: multinode-169917
	I1108 23:59:27.932806  843793 notify.go:220] Checking for updates...
	I1108 23:59:27.934077  843793 config.go:182] Loaded profile config "multinode-169917": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1108 23:59:27.934141  843793 status.go:255] checking status of multinode-169917 ...
	I1108 23:59:27.934753  843793 cli_runner.go:164] Run: docker container inspect multinode-169917 --format={{.State.Status}}
	I1108 23:59:27.954973  843793 status.go:330] multinode-169917 host status = "Stopped" (err=<nil>)
	I1108 23:59:27.954994  843793 status.go:343] host is not running, skipping remaining checks
	I1108 23:59:27.955001  843793 status.go:257] multinode-169917 status: &{Name:multinode-169917 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 23:59:27.955039  843793 status.go:255] checking status of multinode-169917-m02 ...
	I1108 23:59:27.955366  843793 cli_runner.go:164] Run: docker container inspect multinode-169917-m02 --format={{.State.Status}}
	I1108 23:59:27.981134  843793 status.go:330] multinode-169917-m02 host status = "Stopped" (err=<nil>)
	I1108 23:59:27.981157  843793 status.go:343] host is not running, skipping remaining checks
	I1108 23:59:27.981164  843793 status.go:257] multinode-169917-m02 status: &{Name:multinode-169917-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.30s)

TestMultiNode/serial/RestartMultiNode (80.53s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-169917 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1108 23:59:46.145905  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/functional-471648/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-169917 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m19.640932907s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-169917 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (80.53s)

TestMultiNode/serial/ValidateNameConflict (41.11s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-169917
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-169917-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-169917-m02 --driver=docker  --container-runtime=containerd: exit status 14 (103.268894ms)

-- stdout --
	* [multinode-169917-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17586
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17586-749551/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-749551/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-169917-m02' is duplicated with machine name 'multinode-169917-m02' in profile 'multinode-169917'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-169917-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-169917-m03 --driver=docker  --container-runtime=containerd: (38.528090459s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-169917
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-169917: exit status 80 (351.4747ms)

-- stdout --
	* Adding node m03 to cluster multinode-169917
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-169917-m03 already exists in multinode-169917-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-169917-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-169917-m03: (2.052809318s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (41.11s)
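Both failure modes above reduce to a name-uniqueness guard: a new profile may not reuse a machine name inside an existing profile, and node add refuses a node that already exists under another profile. A minimal sketch of such a guard, as a hypothetical helper rather than minikube's actual implementation:

	package main

	import "fmt"

	// validateProfileName is a hypothetical guard mirroring the checks above:
	// the candidate name must collide neither with an existing profile nor
	// with a machine name inside one (e.g. multinode-169917-m02).
	func validateProfileName(name string, profiles map[string][]string) error {
		for profile, machines := range profiles {
			if name == profile {
				return fmt.Errorf("profile name %q already exists", name)
			}
			for _, m := range machines {
				if name == m {
					return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q", name, m, profile)
				}
			}
		}
		return nil
	}

	func main() {
		existing := map[string][]string{
			"multinode-169917": {"multinode-169917", "multinode-169917-m02"},
		}
		fmt.Println(validateProfileName("multinode-169917-m02", existing)) // rejected, as in the log
		fmt.Println(validateProfileName("multinode-169917-m04", existing)) // nil: allowed
	}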

TestPreload (169.75s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-774272 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E1109 00:02:25.286718  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-774272 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m24.277304004s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-774272 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-774272 image pull gcr.io/k8s-minikube/busybox: (1.602454331s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-774272
E1109 00:03:00.824634  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-774272: (1.251261505s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-774272 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-774272 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (1m19.937559947s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-774272 image list
helpers_test.go:175: Cleaning up "test-preload-774272" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-774272
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-774272: (2.402871011s)
--- PASS: TestPreload (169.75s)
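The sequence above is the preload contract: start with --preload=false on an older Kubernetes version, pull an extra image, stop, restart with preloads enabled, and confirm the manually pulled image survived. A sketch of the final assertion (binary, profile, and image name from this run):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "test-preload-774272",
			"image", "list").Output()
		if err != nil {
			fmt.Println("image list failed:", err)
			return
		}
		fmt.Println("busybox survived the restart:",
			strings.Contains(string(out), "gcr.io/k8s-minikube/busybox"))
	}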

TestScheduledStopUnix (107.24s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-280934 --memory=2048 --driver=docker  --container-runtime=containerd
E1109 00:04:46.145407  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/functional-471648/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-280934 --memory=2048 --driver=docker  --container-runtime=containerd: (30.258329084s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-280934 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-280934 -n scheduled-stop-280934
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-280934 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-280934 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-280934 -n scheduled-stop-280934
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-280934
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-280934 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-280934
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-280934: exit status 7 (91.959539ms)

-- stdout --
	scheduled-stop-280934
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-280934 -n scheduled-stop-280934
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-280934 -n scheduled-stop-280934: exit status 7 (87.520194ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-280934" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-280934
E1109 00:06:09.190337  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/functional-471648/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-280934: (5.106343433s)
--- PASS: TestScheduledStopUnix (107.24s)
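
The scheduled-stop flow exercised above can be reproduced outside the test harness. A minimal sketch, assuming a minikube binary on PATH and a hypothetical profile name "demo"; exit status 7 from "minikube status" means the host is stopped and is the expected end state here:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// run executes minikube with the given args and returns its exit code.
func run(args ...string) int {
	cmd := exec.Command("minikube", args...)
	if err := cmd.Run(); err != nil {
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			return exitErr.ExitCode()
		}
		return -1 // binary missing or not startable
	}
	return 0
}

func main() {
	// Schedule a stop 15s out; once it fires, "minikube status" exits 7
	// (host stopped), which the test above treats as "may be ok".
	run("stop", "-p", "demo", "--schedule", "15s")
	// A pending schedule can also be cancelled before it fires:
	// run("stop", "-p", "demo", "--cancel-scheduled")
	code := run("status", "-p", "demo")
	fmt.Printf("status exit code: %d (7 means stopped)\n", code)
}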

                                                
                                    
TestInsufficientStorage (11.37s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-122603 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-122603 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.707456866s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b857e4f4-3fad-4cc9-81eb-08669c2120b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-122603] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6154038a-4a42-4b34-acf4-9e7dab2becb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17586"}}
	{"specversion":"1.0","id":"7424f49b-94b2-4082-a703-0fec3f803f56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d1cd0c42-97cb-44ea-906b-5fde3f7a75e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17586-749551/kubeconfig"}}
	{"specversion":"1.0","id":"a346bcfd-20d0-4faf-930c-b0882745a84b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-749551/.minikube"}}
	{"specversion":"1.0","id":"6860885c-64c1-447b-9b29-765f4e9e6cbb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"16152a98-ff30-4167-83ce-0ed5628a8b96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2ad82c83-eeb3-4cb2-91cb-1c2e4eb31aec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"ec078452-6967-4760-8f33-d3b4518b106f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"550b3e46-7c83-44bd-9c73-a204a6bc66ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"14fc794c-7e53-4a19-be95-518664c7374c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"f296c81d-b62e-40c5-8aaa-4663ae70aab5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-122603 in cluster insufficient-storage-122603","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a5cc1b58-eb9c-437f-9d15-c4c8388a70e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"fe15450b-d34d-4593-b1c1-a5a4e06b9b09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"9c39f0d2-be07-492a-833b-10485c210bc2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-122603 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-122603 --output=json --layout=cluster: exit status 7 (323.601717ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-122603","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-122603","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1109 00:06:19.709231  861044 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-122603" does not appear in /home/jenkins/minikube-integration/17586-749551/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-122603 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-122603 --output=json --layout=cluster: exit status 7 (339.755645ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-122603","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-122603","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1109 00:06:20.049055  861097 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-122603" does not appear in /home/jenkins/minikube-integration/17586-749551/kubeconfig
	E1109 00:06:20.063024  861097 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/insufficient-storage-122603/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-122603" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-122603
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-122603: (1.996775863s)
--- PASS: TestInsufficientStorage (11.37s)
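
With --output=json, each stdout line above is a CloudEvents-style JSON object (specversion, type, data). A minimal decoding sketch, assuming the event lines are fed in on stdin; the field names are taken from the output shown, everything else is illustrative:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the fields visible in the JSON lines above.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // some event lines are long
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip non-JSON lines
		}
		// Failures such as RSRC_DOCKER_STORAGE arrive as *.error events.
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exitcode %s): %s\n",
				e.Data["name"], e.Data["exitcode"], e.Data["message"])
		}
	}
}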

                                                
                                    
TestRunningBinaryUpgrade (85.99s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.26.0.730856654.exe start -p running-upgrade-766390 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.26.0.730856654.exe start -p running-upgrade-766390 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (49.01166519s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-766390 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1109 00:12:25.286385  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt: no such file or directory
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-766390 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (32.13307276s)
helpers_test.go:175: Cleaning up "running-upgrade-766390" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-766390
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-766390: (3.087414342s)
--- PASS: TestRunningBinaryUpgrade (85.99s)

                                                
                                    
TestKubernetesUpgrade (385.87s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-940244 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-940244 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m8.853835598s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-940244
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-940244: (1.749056163s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-940244 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-940244 status --format={{.Host}}: exit status 7 (128.979961ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-940244 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-940244 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m45.007681065s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-940244 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-940244 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-940244 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd: exit status 106 (134.956308ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-940244] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17586
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17586-749551/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-749551/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-940244
	    minikube start -p kubernetes-upgrade-940244 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9402442 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.3, by running:
	    
	    minikube start -p kubernetes-upgrade-940244 --kubernetes-version=v1.28.3
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-940244 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-940244 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (27.579927383s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-940244" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-940244
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-940244: (2.257206213s)
--- PASS: TestKubernetesUpgrade (385.87s)
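
The downgrade guard surfaces as exit status 106 (K8S_DOWNGRADE_UNSUPPORTED). A sketch of how a caller can distinguish that code from other failures; the profile name is hypothetical:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "demo",
		"--kubernetes-version=v1.16.0", "--driver=docker", "--container-runtime=containerd")
	out, err := cmd.CombinedOutput()
	// ProcessState.ExitCode() is -1 if the process never exited normally;
	// 106 is the K8S_DOWNGRADE_UNSUPPORTED code seen above.
	if cmd.ProcessState != nil && cmd.ProcessState.ExitCode() == 106 {
		fmt.Println("downgrade refused as expected:")
		fmt.Println(string(out))
		return
	}
	if err != nil {
		fmt.Println("unexpected failure:", err)
	}
}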

                                                
                                    
TestMissingContainerUpgrade (192.24s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.26.0.1890468417.exe start -p missing-upgrade-095434 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.26.0.1890468417.exe start -p missing-upgrade-095434 --memory=2200 --driver=docker  --container-runtime=containerd: (1m33.364818881s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-095434
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-095434: (1.651803762s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-095434
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-095434 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1109 00:08:00.824455  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
E1109 00:08:48.337367  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt: no such file or directory
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-095434 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m32.728748093s)
helpers_test.go:175: Cleaning up "missing-upgrade-095434" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-095434
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-095434: (2.448714422s)
--- PASS: TestMissingContainerUpgrade (192.24s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-457115 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-457115 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (91.823498ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-457115] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17586
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17586-749551/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-749551/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
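
The exit-14 (MK_USAGE) failure above is a mutual-exclusion check between --no-kubernetes and --kubernetes-version. A minimal illustration of the same validation pattern, not minikube's actual code:

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	// Mirror the MK_USAGE guard: the two flags contradict each other.
	if *noK8s && *k8sVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14)
	}
	fmt.Println("flags ok")
}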

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (42.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-457115 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-457115 --driver=docker  --container-runtime=containerd: (41.900207332s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-457115 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.49s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-457115 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-457115 --no-kubernetes --driver=docker  --container-runtime=containerd: (14.952342328s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-457115 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-457115 status -o json: exit status 2 (327.189389ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-457115","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-457115
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-457115: (1.948064791s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.23s)
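
"minikube status -o json" for a single profile emits the flat object shown above; note it exits 2 when kubelet/apiserver are stopped but still prints valid JSON. A decoding sketch grounded in the fields shown, with the profile name hypothetical:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileStatus mirrors the object printed above.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// Exit status 2 still comes with JSON on stdout, so capture stdout
	// regardless of the error.
	out, _ := exec.Command("minikube", "-p", "demo", "status", "-o", "json").Output()
	var st profileStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("no parsable status:", err)
		return
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}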

                                                
                                    
TestNoKubernetes/serial/Start (5.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-457115 --no-kubernetes --driver=docker  --container-runtime=containerd
E1109 00:07:25.286780  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-457115 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.7501069s)
--- PASS: TestNoKubernetes/serial/Start (5.75s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-457115 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-457115 "sudo systemctl is-active --quiet service kubelet": exit status 1 (304.094256ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)
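
The verification step relies on "systemctl is-active" exiting non-zero when the unit is not running (here the ssh wrapper reports status 1, with the remote shell exiting 3 for "inactive"). A sketch of the same check, profile name hypothetical:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// is-active --quiet prints nothing and exits 0 only when the unit is
	// active, so a non-zero exit is the expected "kubelet not running" result.
	cmd := exec.Command("minikube", "ssh", "-p", "demo",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active (expected):", err)
		return
	}
	fmt.Println("kubelet is active")
}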

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.92s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.92s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-457115
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-457115: (1.32597758s)
--- PASS: TestNoKubernetes/serial/Stop (1.33s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-457115 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-457115 --driver=docker  --container-runtime=containerd: (7.298423734s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.30s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-457115 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-457115 "sudo systemctl is-active --quiet service kubelet": exit status 1 (396.298246ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.40s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.77s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.77s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (111.25s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.26.0.2002018875.exe start -p stopped-upgrade-222752 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E1109 00:09:46.146042  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/functional-471648/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.26.0.2002018875.exe start -p stopped-upgrade-222752 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (52.082444367s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.26.0.2002018875.exe -p stopped-upgrade-222752 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.26.0.2002018875.exe -p stopped-upgrade-222752 stop: (20.208838761s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-222752 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-222752 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (38.953173915s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (111.25s)
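
The upgrade path above is: start with the old release binary, stop, then start the same profile with the binary under test. A sketch of that sequence, with the binary paths and profile name hypothetical:

package main

import (
	"fmt"
	"os/exec"
)

// startStopStart drives the legacy binary and then the new one against the
// same profile, mirroring the upgrade sequence above.
func startStopStart(oldBin, newBin, profile string) error {
	steps := [][]string{
		{oldBin, "start", "-p", profile, "--memory=2200", "--vm-driver=docker", "--container-runtime=containerd"},
		{oldBin, "-p", profile, "stop"},
		{newBin, "start", "-p", profile, "--memory=2200", "--driver=docker", "--container-runtime=containerd"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v failed: %v\n%s", s, err, out)
		}
	}
	return nil
}

func main() {
	if err := startStopStart("/tmp/minikube-v1.26.0", "./minikube", "upgrade-demo"); err != nil {
		fmt.Println(err)
	}
}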

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.2s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-222752
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-222752: (1.200241144s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.20s)

                                                
                                    
TestPause/serial/Start (92.55s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-819109 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E1109 00:13:00.824325  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-819109 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m32.550040051s)
--- PASS: TestPause/serial/Start (92.55s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.35s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-819109 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-819109 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.316226692s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.35s)

                                                
                                    
TestPause/serial/Pause (0.9s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-819109 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.90s)

                                                
                                    
TestPause/serial/VerifyStatus (0.43s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-819109 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-819109 --output=json --layout=cluster: exit status 2 (431.253682ms)

                                                
                                                
-- stdout --
	{"Name":"pause-819109","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-819109","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.43s)
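
The --layout=cluster JSON above reuses HTTP-like status codes (200 OK, 405 Stopped, 418 Paused, 500 Error, 507 InsufficientStorage). A decoding sketch with structs shaped after the output shown; the payload below is trimmed from the test output above:

package main

import (
	"encoding/json"
	"fmt"
)

// Shapes follow the --layout=cluster output shown above.
type component struct {
	Name       string
	StatusCode int
	StatusName string
}

type node struct {
	Name       string
	StatusCode int
	StatusName string
	Components map[string]component
}

type clusterState struct {
	Name       string
	StatusCode int
	StatusName string
	Nodes      []node
}

func main() {
	raw := `{"Name":"pause-819109","StatusCode":418,"StatusName":"Paused",
	 "Nodes":[{"Name":"pause-819109","StatusCode":200,"StatusName":"OK",
	 "Components":{"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
	var cs clusterState
	if err := json.Unmarshal([]byte(raw), &cs); err != nil {
		panic(err)
	}
	for _, n := range cs.Nodes {
		for name, c := range n.Components {
			fmt.Printf("%s/%s: %d %s\n", n.Name, name, c.StatusCode, c.StatusName)
		}
	}
}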

                                                
                                    
TestPause/serial/Unpause (1.16s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-819109 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-819109 --alsologtostderr -v=5: (1.156560948s)
--- PASS: TestPause/serial/Unpause (1.16s)

                                                
                                    
TestPause/serial/PauseAgain (1.46s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-819109 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-819109 --alsologtostderr -v=5: (1.455992697s)
--- PASS: TestPause/serial/PauseAgain (1.46s)

                                                
                                    
TestPause/serial/DeletePaused (3.59s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-819109 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-819109 --alsologtostderr -v=5: (3.589864207s)
--- PASS: TestPause/serial/DeletePaused (3.59s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.83s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-819109
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-819109: exit status 1 (72.127561ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-819109: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.83s)
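
Verifying deletion hinges on "docker volume inspect" exiting 1 (printing "[]" and a "no such volume" error) for a missing volume. A sketch of that check:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// A deleted profile should leave no volume behind; inspect then exits 1
	// and prints "[]" plus a "no such volume" error, as in the output above.
	err := exec.Command("docker", "volume", "inspect", "pause-819109").Run()
	if err != nil {
		fmt.Println("volume gone (expected):", err)
		return
	}
	fmt.Println("volume still exists")
}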

                                                
                                    
TestNetworkPlugins/group/false (6.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-901856 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-901856 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (347.668517ms)

                                                
                                                
-- stdout --
	* [false-901856] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17586
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17586-749551/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-749551/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 00:14:51.266472  899889 out.go:296] Setting OutFile to fd 1 ...
	I1109 00:14:51.266723  899889 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 00:14:51.266750  899889 out.go:309] Setting ErrFile to fd 2...
	I1109 00:14:51.266769  899889 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 00:14:51.267077  899889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17586-749551/.minikube/bin
	I1109 00:14:51.267568  899889 out.go:303] Setting JSON to false
	I1109 00:14:51.268744  899889 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":25041,"bootTime":1699463851,"procs":381,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1109 00:14:51.268850  899889 start.go:138] virtualization:  
	I1109 00:14:51.273735  899889 out.go:177] * [false-901856] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1109 00:14:51.275719  899889 out.go:177]   - MINIKUBE_LOCATION=17586
	I1109 00:14:51.275803  899889 notify.go:220] Checking for updates...
	I1109 00:14:51.280266  899889 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 00:14:51.286464  899889 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17586-749551/kubeconfig
	I1109 00:14:51.296182  899889 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-749551/.minikube
	I1109 00:14:51.300178  899889 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 00:14:51.304447  899889 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 00:14:51.308913  899889 config.go:182] Loaded profile config "force-systemd-env-100179": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1109 00:14:51.309103  899889 driver.go:378] Setting default libvirt URI to qemu:///system
	I1109 00:14:51.340074  899889 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1109 00:14:51.340193  899889 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 00:14:51.491168  899889 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:48 SystemTime:2023-11-09 00:14:51.477728751 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1109 00:14:51.491281  899889 docker.go:295] overlay module found
	I1109 00:14:51.496552  899889 out.go:177] * Using the docker driver based on user configuration
	I1109 00:14:51.498473  899889 start.go:298] selected driver: docker
	I1109 00:14:51.498504  899889 start.go:902] validating driver "docker" against <nil>
	I1109 00:14:51.498520  899889 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 00:14:51.500870  899889 out.go:177] 
	W1109 00:14:51.502681  899889 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1109 00:14:51.504395  899889 out.go:177] 

                                                
                                                
** /stderr **
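
The exit-14 failure above comes from driver validation: --cni=false is rejected because the containerd runtime needs a CNI plugin for pod networking. A minimal illustration of that guard, not minikube's actual code:

package main

import (
	"fmt"
	"os"
)

// validateCNI mirrors the MK_USAGE guard seen above: runtimes with no
// built-in networking (containerd, cri-o) cannot run with CNI disabled.
func validateCNI(runtime, cni string) error {
	if cni == "false" && (runtime == "containerd" || runtime == "crio") {
		return fmt.Errorf("the %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	if err := validateCNI("containerd", "false"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14)
	}
}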
net_test.go:88: 
----------------------- debugLogs start: false-901856 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-901856

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-901856

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-901856

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-901856

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-901856

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-901856

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-901856

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-901856

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-901856

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-901856

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-901856"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-901856"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-901856"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-901856

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-901856"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-901856"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-901856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-901856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-901856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-901856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-901856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-901856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-901856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-901856" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-901856"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-901856"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-901856"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-901856"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-901856"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-901856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-901856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-901856" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-901856"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-901856"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-901856"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-901856"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-901856"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-901856

>>> host: docker daemon status:
* Profile "false-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-901856"

>>> host: docker daemon config:
* Profile "false-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-901856"

>>> host: /etc/docker/daemon.json:
* Profile "false-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-901856"

>>> host: docker system info:
* Profile "false-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-901856"

>>> host: cri-docker daemon status:
* Profile "false-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-901856"

>>> host: cri-docker daemon config:
* Profile "false-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-901856"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-901856"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-901856"

>>> host: cri-dockerd version:
* Profile "false-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-901856"

>>> host: containerd daemon status:
* Profile "false-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-901856"

>>> host: containerd daemon config:
* Profile "false-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-901856"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-901856"

>>> host: /etc/containerd/config.toml:
* Profile "false-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-901856"

>>> host: containerd config dump:
* Profile "false-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-901856"

>>> host: crio daemon status:
* Profile "false-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-901856"

>>> host: crio daemon config:
* Profile "false-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-901856"

>>> host: /etc/crio:
* Profile "false-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-901856"

>>> host: crio config:
* Profile "false-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-901856"

----------------------- debugLogs end: false-901856 [took: 5.87461514s] --------------------------------
helpers_test.go:175: Cleaning up "false-901856" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-901856
--- PASS: TestNetworkPlugins/group/false (6.49s)

TestStartStop/group/old-k8s-version/serial/FirstStart (127.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-134656 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E1109 00:17:25.287121  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt: no such file or directory
E1109 00:18:00.825005  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-134656 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m7.915799163s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (127.92s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.66s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-134656 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b27ac156-0f14-4058-ac7e-08448c022ed8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b27ac156-0f14-4058-ac7e-08448c022ed8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.028740107s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-134656 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.66s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-134656 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-134656 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.044129329s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-134656 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.31s)

TestStartStop/group/old-k8s-version/serial/Stop (12.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-134656 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-134656 --alsologtostderr -v=3: (12.320659757s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.32s)

TestStartStop/group/no-preload/serial/FirstStart (86.69s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-881977 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-881977 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3: (1m26.686488869s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (86.69s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-134656 -n old-k8s-version-134656
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-134656 -n old-k8s-version-134656: exit status 7 (186.253172ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-134656 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.40s)

TestStartStop/group/old-k8s-version/serial/SecondStart (668.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-134656 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E1109 00:19:46.145317  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/functional-471648/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-134656 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (11m8.162265496s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-134656 -n old-k8s-version-134656
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (668.83s)

TestStartStop/group/no-preload/serial/DeployApp (8.54s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-881977 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [157d41fa-d8b8-41b9-bdaf-19141d5c2183] Pending
helpers_test.go:344: "busybox" [157d41fa-d8b8-41b9-bdaf-19141d5c2183] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [157d41fa-d8b8-41b9-bdaf-19141d5c2183] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.037580645s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-881977 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.54s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.32s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-881977 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-881977 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.196208786s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-881977 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.32s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-7n52n" [dc897c23-5f21-468a-93f5-566a0b10508d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.063135874s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.06s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-7n52n" [dc897c23-5f21-468a-93f5-566a0b10508d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010394386s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-134656 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-134656 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/old-k8s-version/serial/Pause (3.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-134656 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-134656 -n old-k8s-version-134656
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-134656 -n old-k8s-version-134656: exit status 2 (381.156945ms)

-- stdout --
	Paused
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-134656 -n old-k8s-version-134656
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-134656 -n old-k8s-version-134656: exit status 2 (347.258298ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-134656 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-134656 -n old-k8s-version-134656
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-134656 -n old-k8s-version-134656
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.41s)

TestStartStop/group/embed-certs/serial/FirstStart (55.58s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-479416 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-479416 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3: (55.571476839s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (55.58s)

TestStartStop/group/embed-certs/serial/DeployApp (9.52s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-479416 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ac2e113c-2165-4816-9822-efb4ad2978da] Pending
helpers_test.go:344: "busybox" [ac2e113c-2165-4816-9822-efb4ad2978da] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ac2e113c-2165-4816-9822-efb4ad2978da] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.025289156s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-479416 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.52s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-479416 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-479416 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.060902596s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-479416 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/embed-certs/serial/Stop (12.09s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-479416 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-479416 --alsologtostderr -v=3: (12.093976636s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.09s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-479416 -n embed-certs-479416
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-479416 -n embed-certs-479416: exit status 7 (92.165257ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-479416 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (334.07s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-479416 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3
E1109 00:32:25.286061  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt: no such file or directory
E1109 00:32:43.875459  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
E1109 00:33:00.824368  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
E1109 00:33:28.244214  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/old-k8s-version-134656/client.crt: no such file or directory
E1109 00:33:28.249533  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/old-k8s-version-134656/client.crt: no such file or directory
E1109 00:33:28.259803  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/old-k8s-version-134656/client.crt: no such file or directory
E1109 00:33:28.280048  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/old-k8s-version-134656/client.crt: no such file or directory
E1109 00:33:28.320325  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/old-k8s-version-134656/client.crt: no such file or directory
E1109 00:33:28.400719  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/old-k8s-version-134656/client.crt: no such file or directory
E1109 00:33:28.561139  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/old-k8s-version-134656/client.crt: no such file or directory
E1109 00:33:28.881600  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/old-k8s-version-134656/client.crt: no such file or directory
E1109 00:33:29.522385  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/old-k8s-version-134656/client.crt: no such file or directory
E1109 00:33:30.803154  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/old-k8s-version-134656/client.crt: no such file or directory
E1109 00:33:33.363777  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/old-k8s-version-134656/client.crt: no such file or directory
E1109 00:33:38.484924  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/old-k8s-version-134656/client.crt: no such file or directory
E1109 00:33:48.726033  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/old-k8s-version-134656/client.crt: no such file or directory
E1109 00:34:09.206555  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/old-k8s-version-134656/client.crt: no such file or directory
E1109 00:34:46.145226  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/functional-471648/client.crt: no such file or directory
E1109 00:34:50.167705  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/old-k8s-version-134656/client.crt: no such file or directory
E1109 00:36:12.088166  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/old-k8s-version-134656/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-479416 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3: (5m33.687785118s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-479416 -n embed-certs-479416
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (334.07s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-pmdgp" [85bfa105-3d1b-468b-ac9a-abe9daf83727] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-pmdgp" [85bfa105-3d1b-468b-ac9a-abe9daf83727] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.025646913s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-pmdgp" [85bfa105-3d1b-468b-ac9a-abe9daf83727] Running
E1109 00:37:25.286815  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01505288s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-479416 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-479416 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.37s)

TestStartStop/group/embed-certs/serial/Pause (3.45s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-479416 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-479416 -n embed-certs-479416
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-479416 -n embed-certs-479416: exit status 2 (354.850455ms)

-- stdout --
	Paused
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-479416 -n embed-certs-479416
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-479416 -n embed-certs-479416: exit status 2 (399.105948ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-479416 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-479416 -n embed-certs-479416
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-479416 -n embed-certs-479416
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.45s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (62.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-495768 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3
E1109 00:38:00.825250  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
E1109 00:38:28.244479  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/old-k8s-version-134656/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-495768 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3: (1m2.144210129s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (62.14s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-495768 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0eb16ce3-53db-4b4d-9762-fba083dd3053] Pending
helpers_test.go:344: "busybox" [0eb16ce3-53db-4b4d-9762-fba083dd3053] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0eb16ce3-53db-4b4d-9762-fba083dd3053] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.032998491s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-495768 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.49s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-495768 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-495768 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.051230976s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-495768 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-495768 --alsologtostderr -v=3
E1109 00:38:55.929194  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/old-k8s-version-134656/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-495768 --alsologtostderr -v=3: (12.146980995s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.15s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-495768 -n default-k8s-diff-port-495768
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-495768 -n default-k8s-diff-port-495768: exit status 7 (103.000795ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-495768 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (338.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-495768 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-495768 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3: (5m37.990038916s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-495768 -n default-k8s-diff-port-495768
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (338.40s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-z64h5" [e223eb8a-8423-414c-9fb6-0939fb798e5f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-z64h5" [e223eb8a-8423-414c-9fb6-0939fb798e5f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.030085701s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-z64h5" [e223eb8a-8423-414c-9fb6-0939fb798e5f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010229882s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-495768 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-495768 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.76s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-495768 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-495768 -n default-k8s-diff-port-495768
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-495768 -n default-k8s-diff-port-495768: exit status 2 (358.995205ms)

-- stdout --
	Paused
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-495768 -n default-k8s-diff-port-495768
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-495768 -n default-k8s-diff-port-495768: exit status 2 (363.75762ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-495768 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-495768 -n default-k8s-diff-port-495768
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-495768 -n default-k8s-diff-port-495768
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.76s)

TestStartStop/group/newest-cni/serial/FirstStart (43.32s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-701076 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-701076 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3: (43.324006529s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.32s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.27s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-701076 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-701076 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.271802894s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.27s)

TestStartStop/group/newest-cni/serial/Stop (1.28s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-701076 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-701076 --alsologtostderr -v=3: (1.284386757s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.28s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-701076 -n newest-cni-701076
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-701076 -n newest-cni-701076: exit status 7 (82.954019ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-701076 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (29.94s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-701076 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-701076 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3: (29.511219927s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-701076 -n newest-cni-701076
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (29.94s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-701076 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)

TestStartStop/group/newest-cni/serial/Pause (3.22s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-701076 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-701076 -n newest-cni-701076
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-701076 -n newest-cni-701076: exit status 2 (363.138848ms)

-- stdout --
	Paused
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-701076 -n newest-cni-701076
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-701076 -n newest-cni-701076: exit status 2 (353.426762ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-701076 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-701076 -n newest-cni-701076
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-701076 -n newest-cni-701076
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.22s)

TestNetworkPlugins/group/auto/Start (59.14s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-901856 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-901856 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (59.137762705s)
--- PASS: TestNetworkPlugins/group/auto/Start (59.14s)

TestNetworkPlugins/group/auto/KubeletFlags (0.34s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-901856 "pgrep -a kubelet"
E1109 00:47:25.286751  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

TestNetworkPlugins/group/auto/NetCatPod (9.36s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-901856 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wlfjl" [84d24200-b1f5-4ed0-84e3-5adbe92466d9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-wlfjl" [84d24200-b1f5-4ed0-84e3-5adbe92466d9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.011798646s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.36s)
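Each NetCatPod step force-replaces testdata/netcat-deployment.yaml and then polls for a Running pod labelled app=netcat. Roughly the same check with stock kubectl, assuming the same context and manifest (a sketch, not the harness's exact polling):

	kubectl --context auto-901856 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context auto-901856 rollout status deployment/netcat --timeout=15m
	kubectl --context auto-901856 get pods -l app=netcat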

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-901856 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

TestNetworkPlugins/group/auto/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-901856 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

TestNetworkPlugins/group/auto/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-901856 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
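The Localhost and HairPin probes that follow each Start both run nc from inside the netcat pod: Localhost targets port 8080 on the pod's own loopback, while HairPin targets the pod's own service name, so the connection leaves the pod, reaches the service VIP, and is NATed straight back to the same pod (the classic hairpin path). The two probes, verbatim from the log:

	kubectl --context auto-901856 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"   # loopback
	kubectl --context auto-901856 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"      # hairpin via the service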

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (85.67s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-901856 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-901856 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m25.667442854s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (85.67s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-qzfpn" [dadf3c41-0c53-4e2f-bf08-5cd4e3da74be] Running
E1109 00:49:23.875887  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.030102987s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)
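The ControllerPod step waits for the CNI's own pod to come up in kube-system. An equivalent one-liner with stock kubectl, assuming the same context (a sketch, not the harness's poller):

	kubectl --context kindnet-901856 -n kube-system wait pod -l app=kindnet --for=condition=Ready --timeout=600s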

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-901856 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.39s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-901856 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-q878q" [4969b34e-55e1-4340-948c-1cbb0137ec41] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-q878q" [4969b34e-55e1-4340-948c-1cbb0137ec41] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.013609145s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.39s)

TestNetworkPlugins/group/kindnet/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-901856 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

TestNetworkPlugins/group/kindnet/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-901856 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

TestNetworkPlugins/group/kindnet/HairPin (0.20s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-901856 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)

TestNetworkPlugins/group/calico/Start (82.63s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-901856 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-901856 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m22.626881076s)
--- PASS: TestNetworkPlugins/group/calico/Start (82.63s)

TestNetworkPlugins/group/custom-flannel/Start (67.49s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-901856 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E1109 00:50:18.039737  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/no-preload-881977/client.crt: no such file or directory
E1109 00:50:18.045181  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/no-preload-881977/client.crt: no such file or directory
E1109 00:50:18.055444  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/no-preload-881977/client.crt: no such file or directory
E1109 00:50:18.075708  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/no-preload-881977/client.crt: no such file or directory
E1109 00:50:18.116017  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/no-preload-881977/client.crt: no such file or directory
E1109 00:50:18.196460  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/no-preload-881977/client.crt: no such file or directory
E1109 00:50:18.356954  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/no-preload-881977/client.crt: no such file or directory
E1109 00:50:18.677922  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/no-preload-881977/client.crt: no such file or directory
E1109 00:50:19.319062  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/no-preload-881977/client.crt: no such file or directory
E1109 00:50:20.599316  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/no-preload-881977/client.crt: no such file or directory
E1109 00:50:23.159892  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/no-preload-881977/client.crt: no such file or directory
E1109 00:50:28.281076  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/no-preload-881977/client.crt: no such file or directory
E1109 00:50:38.521255  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/no-preload-881977/client.crt: no such file or directory
E1109 00:50:59.002817  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/no-preload-881977/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-901856 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m7.490092915s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (67.49s)
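Unlike the other groups, which pass a built-in CNI name (kindnet, calico, flannel, bridge), this group exercises the path where --cni points at an arbitrary CNI manifest. Trimmed to its essentials, the start command from the log is:

	minikube start -p custom-flannel-901856 --memory=3072 \
	  --cni=testdata/kube-flannel.yaml \
	  --driver=docker --container-runtime=containerd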

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-901856 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.55s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-901856 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2vrrh" [4ec41f3f-3d48-403c-aa48-7b1c33a4e4cf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1109 00:51:20.127375  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/default-k8s-diff-port-495768/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-2vrrh" [4ec41f3f-3d48-403c-aa48-7b1c33a4e4cf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.025854803s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.55s)

TestNetworkPlugins/group/calico/ControllerPod (5.04s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-4zggt" [ba34b38f-f3f5-4385-aa2b-4e752b7a57c1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.034445289s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

TestNetworkPlugins/group/custom-flannel/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-901856 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-901856 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-901856 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/calico/KubeletFlags (0.35s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-901856 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

TestNetworkPlugins/group/calico/NetCatPod (9.49s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-901856 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-knx8c" [98d68995-8dc6-45d7-bda9-3a1a78a3356a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-knx8c" [98d68995-8dc6-45d7-bda9-3a1a78a3356a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.011674416s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.49s)

TestNetworkPlugins/group/calico/DNS (0.34s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-901856 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.34s)

TestNetworkPlugins/group/calico/Localhost (0.30s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-901856 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.30s)

TestNetworkPlugins/group/calico/HairPin (0.31s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-901856 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.31s)

TestNetworkPlugins/group/enable-default-cni/Start (52.32s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-901856 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-901856 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (52.320139941s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (52.32s)

TestNetworkPlugins/group/flannel/Start (62.49s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-901856 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E1109 00:52:25.286882  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/ingress-addon-legacy-316909/client.crt: no such file or directory
E1109 00:52:25.641278  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/auto-901856/client.crt: no such file or directory
E1109 00:52:25.646478  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/auto-901856/client.crt: no such file or directory
E1109 00:52:25.656661  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/auto-901856/client.crt: no such file or directory
E1109 00:52:25.677057  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/auto-901856/client.crt: no such file or directory
E1109 00:52:25.717259  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/auto-901856/client.crt: no such file or directory
E1109 00:52:25.797509  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/auto-901856/client.crt: no such file or directory
E1109 00:52:25.957696  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/auto-901856/client.crt: no such file or directory
E1109 00:52:26.278753  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/auto-901856/client.crt: no such file or directory
E1109 00:52:26.919343  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/auto-901856/client.crt: no such file or directory
E1109 00:52:28.200094  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/auto-901856/client.crt: no such file or directory
E1109 00:52:30.761047  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/auto-901856/client.crt: no such file or directory
E1109 00:52:35.881778  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/auto-901856/client.crt: no such file or directory
E1109 00:52:46.122000  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/auto-901856/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-901856 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m2.494546106s)
--- PASS: TestNetworkPlugins/group/flannel/Start (62.49s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.47s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-901856 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.47s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.51s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-901856 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-t5g65" [160e9f4d-d237-4f16-bd56-0292014d2213] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-t5g65" [160e9f4d-d237-4f16-bd56-0292014d2213] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.013517069s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.51s)

TestNetworkPlugins/group/enable-default-cni/DNS (16.53s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-901856 exec deployment/netcat -- nslookup kubernetes.default
E1109 00:53:00.824223  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/addons-118967/client.crt: no such file or directory
E1109 00:53:01.883157  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/no-preload-881977/client.crt: no such file or directory
E1109 00:53:06.602912  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/auto-901856/client.crt: no such file or directory
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-901856 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.324063265s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-901856 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (16.53s)
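The 16.53s on this otherwise sub-second check reflects a retry: the first in-pod nslookup timed out after roughly 15s and the second attempt succeeded, a common hiccup right after CoreDNS comes up. A manual retry loop in the same spirit, assuming the same context and netcat deployment:

	for i in 1 2 3; do
	  kubectl --context enable-default-cni-901856 exec deployment/netcat -- \
	    nslookup kubernetes.default && break
	  sleep 5
	done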

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.03s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-2jjjd" [8512ab84-dab2-466c-a924-9f27eff6fc4b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.028687635s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-901856 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-901856 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-901856 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

TestNetworkPlugins/group/flannel/NetCatPod (11.40s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-901856 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-59cm7" [e30157d3-655f-4527-9817-f8539f732548] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-59cm7" [e30157d3-655f-4527-9817-f8539f732548] Running
E1109 00:53:28.244949  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/old-k8s-version-134656/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.011296439s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.40s)

TestNetworkPlugins/group/flannel/DNS (0.26s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-901856 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.26s)

TestNetworkPlugins/group/flannel/Localhost (0.30s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-901856 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.30s)

TestNetworkPlugins/group/flannel/HairPin (0.27s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-901856 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.27s)

TestNetworkPlugins/group/bridge/Start (80.42s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-901856 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E1109 00:53:47.563905  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/auto-901856/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-901856 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m20.415222624s)
--- PASS: TestNetworkPlugins/group/bridge/Start (80.42s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-901856 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

TestNetworkPlugins/group/bridge/NetCatPod (9.34s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-901856 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-btrb2" [8153cbed-a2de-4ef5-9594-d8171a0818af] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-btrb2" [8153cbed-a2de-4ef5-9594-d8171a0818af] Running
E1109 00:55:03.793991  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/kindnet-901856/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.010871155s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.34s)

TestNetworkPlugins/group/bridge/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-901856 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

TestNetworkPlugins/group/bridge/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-901856 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-901856 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)
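That closes out the network-plugin matrix. To rerun a single group against a local build, the integration suite is driven through go test; a sketch assuming the invocation from minikube's contributor docs (the -run pattern and --minikube-start-args are the knobs this report's jobs vary):

	go test ./test/integration -v -timeout 90m \
	  -run 'TestNetworkPlugins/group/bridge' \
	  -args --minikube-start-args='--driver=docker --container-runtime=containerd'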

                                                
                                    

Test skip (28/306)

TestDownloadOnly/v1.16.0/cached-images (0.00s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0.00s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0.00s)
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.3/cached-images (0.00s)
=== RUN   TestDownloadOnly/v1.28.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.3/cached-images (0.00s)

TestDownloadOnly/v1.28.3/binaries (0.00s)
=== RUN   TestDownloadOnly/v1.28.3/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.3/binaries (0.00s)

TestDownloadOnly/v1.28.3/kubectl (0.00s)
=== RUN   TestDownloadOnly/v1.28.3/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.3/kubectl (0.00s)

TestDownloadOnlyKic (0.65s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-403562 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:237: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-403562" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-403562
--- SKIP: TestDownloadOnlyKic (0.65s)

TestOffline (0.00s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0.00s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:443: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0.00s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0.00s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0.00s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0.00s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0.00s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0.00s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0.00s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0.00s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0.00s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0.00s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0.00s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0.00s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0.00s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.18s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-030385" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-030385
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

TestNetworkPlugins/group/kubenet (5.51s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
E1109 00:14:46.146309  754902 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-749551/.minikube/profiles/functional-471648/client.crt: no such file or directory
panic.go:523: 
----------------------- debugLogs start: kubenet-901856 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-901856

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-901856

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-901856

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-901856

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-901856

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-901856

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-901856

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-901856

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-901856

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-901856

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-901856"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-901856"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-901856"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-901856

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-901856"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-901856"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-901856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-901856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-901856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-901856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-901856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-901856" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-901856" does not exist

>>> k8s: api server logs:
error: context "kubenet-901856" does not exist

>>> host: /etc/cni:
* Profile "kubenet-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-901856"

>>> host: ip a s:
* Profile "kubenet-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-901856"

>>> host: ip r s:
* Profile "kubenet-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-901856"

>>> host: iptables-save:
* Profile "kubenet-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-901856"

>>> host: iptables table nat:
* Profile "kubenet-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-901856"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-901856" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-901856" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-901856" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-901856"

>>> host: kubelet daemon config:
* Profile "kubenet-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-901856"

>>> k8s: kubelet logs:
* Profile "kubenet-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-901856"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-901856"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-901856"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

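The empty kubeconfig above (clusters, contexts and users all null) confirms that no cluster was ever created for this profile: minikube normally writes a kubeconfig context only after a successful "minikube start". A minimal Go sketch of how a caller could verify this before running kubectl commands follows; it shells out to "kubectl config get-contexts -o name" (which prints one context name per line), and the helper name contextExists is illustrative, not part of the test harness.

// contextcheck.go: hedged sketch, not minikube source.
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
)

// contextExists reports whether a kubeconfig context with the given name exists.
func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		if sc.Text() == name {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := contextExists("kubenet-901856")
	fmt.Println(ok, err) // false here: the config dump above has contexts: null
}
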
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-901856

>>> host: docker daemon status:
* Profile "kubenet-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-901856"

>>> host: docker daemon config:
* Profile "kubenet-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-901856"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-901856"

>>> host: docker system info:
* Profile "kubenet-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-901856"

>>> host: cri-docker daemon status:
* Profile "kubenet-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-901856"

>>> host: cri-docker daemon config:
* Profile "kubenet-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-901856"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-901856"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-901856"

>>> host: cri-dockerd version:
* Profile "kubenet-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-901856"

>>> host: containerd daemon status:
* Profile "kubenet-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-901856"

>>> host: containerd daemon config:
* Profile "kubenet-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-901856"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-901856"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-901856"

>>> host: containerd config dump:
* Profile "kubenet-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-901856"

>>> host: crio daemon status:
* Profile "kubenet-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-901856"

>>> host: crio daemon config:
* Profile "kubenet-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-901856"

>>> host: /etc/crio:
* Profile "kubenet-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-901856"

>>> host: crio config:
* Profile "kubenet-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-901856"

----------------------- debugLogs end: kubenet-901856 [took: 5.217891795s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-901856" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-901856
--- SKIP: TestNetworkPlugins/group/kubenet (5.51s)
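
Every probe in the debugLogs dump above fails with either "Profile ... not found" (minikube commands) or "context ... does not exist" (kubectl commands) because the test was skipped before "minikube start" ever ran, so neither the profile nor the kubeconfig context was created. A hedged Go sketch of a pre-flight check that distinguishes the two cases follows; it assumes the JSON shape of "minikube profile list -o json" ({"valid": [...], "invalid": [...]}), and profileExists is an illustrative helper, not harness code.

// profilecheck.go: hedged sketch, not minikube source.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors the assumed shape of "minikube profile list -o json".
type profileList struct {
	Valid []struct {
		Name string `json:"Name"`
	} `json:"valid"`
}

// profileExists reports whether minikube knows about the named profile.
func profileExists(name string) (bool, error) {
	out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
	if err != nil {
		return false, err
	}
	var list profileList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, p := range list.Valid {
		if p.Name == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := profileExists("kubenet-901856")
	fmt.Println(ok, err) // false here, matching the "Profile not found" lines above
}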

TestNetworkPlugins/group/cilium (6.08s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-901856 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-901856

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-901856

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-901856

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-901856

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-901856

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-901856

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-901856

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-901856

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-901856

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-901856

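The netcat probes above would normally run inside a test pod and exercise the in-cluster DNS path: resolving kubernetes.default against 10.96.0.10 (the ClusterIP the dig/nc probes target) over both udp/53 and tcp/53. A minimal Go illustration of the same lookup follows, assuming that conventional service IP; it is a sketch of what the probes test, not the harness's implementation.

// dnsprobe.go: hedged sketch of the DNS path the probes exercise.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			// Force every lookup through the assumed kube-dns ClusterIP.
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
	fmt.Println(addrs, err) // fails here: with no cluster, nothing answers on 10.96.0.10
}
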
>>> host: /etc/nsswitch.conf:
* Profile "cilium-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901856"

>>> host: /etc/hosts:
* Profile "cilium-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901856"

>>> host: /etc/resolv.conf:
* Profile "cilium-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901856"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-901856

>>> host: crictl pods:
* Profile "cilium-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901856"

>>> host: crictl containers:
* Profile "cilium-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901856"

>>> k8s: describe netcat deployment:
error: context "cilium-901856" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-901856" does not exist

>>> k8s: netcat logs:
error: context "cilium-901856" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-901856" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-901856" does not exist

>>> k8s: coredns logs:
error: context "cilium-901856" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-901856" does not exist

>>> k8s: api server logs:
error: context "cilium-901856" does not exist

>>> host: /etc/cni:
* Profile "cilium-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901856"

>>> host: ip a s:
* Profile "cilium-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901856"

>>> host: ip r s:
* Profile "cilium-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901856"

>>> host: iptables-save:
* Profile "cilium-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901856"

>>> host: iptables table nat:
* Profile "cilium-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901856"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-901856

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-901856

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-901856" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-901856" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-901856

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-901856

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-901856" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-901856" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-901856" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-901856" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-901856" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901856"

>>> host: kubelet daemon config:
* Profile "cilium-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901856"

>>> k8s: kubelet logs:
* Profile "cilium-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901856"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901856"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901856"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-901856

>>> host: docker daemon status:
* Profile "cilium-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901856"

>>> host: docker daemon config:
* Profile "cilium-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901856"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901856"

>>> host: docker system info:
* Profile "cilium-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901856"

>>> host: cri-docker daemon status:
* Profile "cilium-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901856"

>>> host: cri-docker daemon config:
* Profile "cilium-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901856"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901856"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901856"

>>> host: cri-dockerd version:
* Profile "cilium-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901856"

>>> host: containerd daemon status:
* Profile "cilium-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901856"

>>> host: containerd daemon config:
* Profile "cilium-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901856"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901856"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901856"

>>> host: containerd config dump:
* Profile "cilium-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901856"

>>> host: crio daemon status:
* Profile "cilium-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901856"

>>> host: crio daemon config:
* Profile "cilium-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901856"

>>> host: /etc/crio:
* Profile "cilium-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901856"

>>> host: crio config:
* Profile "cilium-901856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901856"

----------------------- debugLogs end: cilium-901856 [took: 5.859034094s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-901856" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-901856
--- SKIP: TestNetworkPlugins/group/cilium (6.08s)
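
Both network-plugin groups shown here are skipped by the harness itself (the skip message above comes from net_test.go:102), so the failures recorded in their debugLogs are expected rather than regressions. A hedged sketch of the standard Go testing skip pattern the log implies follows; the gating condition is illustrative, not minikube's actual source.

// net_skip_sketch_test.go: hedged sketch, not minikube's net_test.go.
package net_test

import "testing"

func TestNetworkPluginsGroupCilium(t *testing.T) {
	outdated := true // stand-in for whatever condition gates the real skip
	if outdated {
		t.Skip("Skipping the test as it's interfering with other tests and is outdated")
	}
	// the actual CNI connectivity checks would run here
}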
