Test Report: Docker_Linux_containerd_arm64 16578

d4c33ff371b38c9e245a0eee82030d8958ba8577:2023-06-10:29644

Failed tests (9/302)

TestAddons/parallel/Ingress (35.97s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-048679 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-048679 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-048679 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [12791c73-fb76-456e-ab53-2b14cf9c3c50] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [12791c73-fb76-456e-ab53-2b14cf9c3c50] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 7.007773541s
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p addons-048679 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context addons-048679 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p addons-048679 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.061054906s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p addons-048679 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p addons-048679 addons disable ingress-dns --alsologtostderr -v=1: (1.036744416s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p addons-048679 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p addons-048679 addons disable ingress --alsologtostderr -v=1: (7.558553909s)
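
For anyone reproducing this failure outside the test harness: the assertion at addons_test.go:273-279 shells out to nslookup against the cluster node IP (192.168.49.2 in this run, as returned by minikube ip) and fails on a non-zero exit or a "connection timed out" answer in stdout. The Go sketch below mirrors that pattern; it is illustrative only, the helper name checkIngressDNS is hypothetical, and this is not the actual minikube test source.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// checkIngressDNS runs `nslookup <host> <nodeIP>` and reports an error on a
// non-zero exit or a timed-out answer, mirroring the failing assertion above.
func checkIngressDNS(host, nodeIP string) error {
	out, err := exec.Command("nslookup", host, nodeIP).CombinedOutput()
	if err != nil {
		return fmt.Errorf("nslookup %s %s: %v\noutput:\n%s", host, nodeIP, err, out)
	}
	if strings.Contains(string(out), "connection timed out") {
		return fmt.Errorf("unexpected output from nslookup: %s", out)
	}
	return nil
}

func main() {
	// Values taken from this test run: the ingress-dns example host and the minikube node IP.
	if err := checkIngressDNS("hello-john.test", "192.168.49.2"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
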
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-048679
helpers_test.go:235: (dbg) docker inspect addons-048679:

-- stdout --
	[
	    {
	        "Id": "e30db12019a63f342ee80d9ab681f812d0358bcdea66740b481cd607510ccb8f",
	        "Created": "2023-06-10T16:22:36.866976507Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8503,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-06-10T16:22:37.22580717Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:22d1eb0c15f6653533a03d8e96fd97e1d685d349b3f4c622bea2e52531ef44b9",
	        "ResolvConfPath": "/var/lib/docker/containers/e30db12019a63f342ee80d9ab681f812d0358bcdea66740b481cd607510ccb8f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e30db12019a63f342ee80d9ab681f812d0358bcdea66740b481cd607510ccb8f/hostname",
	        "HostsPath": "/var/lib/docker/containers/e30db12019a63f342ee80d9ab681f812d0358bcdea66740b481cd607510ccb8f/hosts",
	        "LogPath": "/var/lib/docker/containers/e30db12019a63f342ee80d9ab681f812d0358bcdea66740b481cd607510ccb8f/e30db12019a63f342ee80d9ab681f812d0358bcdea66740b481cd607510ccb8f-json.log",
	        "Name": "/addons-048679",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-048679:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-048679",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d1cfc1b7fbef4b7c244137375168af7fc2d087924add0dd58395adaf83c17932-init/diff:/var/lib/docker/overlay2/74cb6f838e1fcfc1b6f19e3b70ff76db9bef2f6117698ff19da434ce3223b74a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d1cfc1b7fbef4b7c244137375168af7fc2d087924add0dd58395adaf83c17932/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d1cfc1b7fbef4b7c244137375168af7fc2d087924add0dd58395adaf83c17932/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d1cfc1b7fbef4b7c244137375168af7fc2d087924add0dd58395adaf83c17932/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-048679",
	                "Source": "/var/lib/docker/volumes/addons-048679/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-048679",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-048679",
	                "name.minikube.sigs.k8s.io": "addons-048679",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4bbd67ec5b2d2334318d58e4a078468345343aa45186118a5f9a86dd21e92327",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4bbd67ec5b2d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-048679": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e30db12019a6",
	                        "addons-048679"
	                    ],
	                    "NetworkID": "cd3935ce22e501b0e718f0b2d8fd2a1ed088820880517f50d051cf4a634fa5dd",
	                    "EndpointID": "32d1a1843521c8b90e9893199423b9d4027c40905af79626f70a5d2d2528de8e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-048679 -n addons-048679
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-048679 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-048679 logs -n 25: (1.693068231s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-313106   | jenkins | v1.30.1 | 10 Jun 23 16:21 UTC |                     |
	|         | -p download-only-313106        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-313106   | jenkins | v1.30.1 | 10 Jun 23 16:22 UTC |                     |
	|         | -p download-only-313106        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2   |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	| delete  | --all                          | minikube               | jenkins | v1.30.1 | 10 Jun 23 16:22 UTC | 10 Jun 23 16:22 UTC |
	| delete  | -p download-only-313106        | download-only-313106   | jenkins | v1.30.1 | 10 Jun 23 16:22 UTC | 10 Jun 23 16:22 UTC |
	| delete  | -p download-only-313106        | download-only-313106   | jenkins | v1.30.1 | 10 Jun 23 16:22 UTC | 10 Jun 23 16:22 UTC |
	| start   | --download-only -p             | download-docker-637757 | jenkins | v1.30.1 | 10 Jun 23 16:22 UTC |                     |
	|         | download-docker-637757         |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	| delete  | -p download-docker-637757      | download-docker-637757 | jenkins | v1.30.1 | 10 Jun 23 16:22 UTC | 10 Jun 23 16:22 UTC |
	| start   | --download-only -p             | binary-mirror-962832   | jenkins | v1.30.1 | 10 Jun 23 16:22 UTC |                     |
	|         | binary-mirror-962832           |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --binary-mirror                |                        |         |         |                     |                     |
	|         | http://127.0.0.1:36323         |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-962832        | binary-mirror-962832   | jenkins | v1.30.1 | 10 Jun 23 16:22 UTC | 10 Jun 23 16:22 UTC |
	| start   | -p addons-048679               | addons-048679          | jenkins | v1.30.1 | 10 Jun 23 16:22 UTC | 10 Jun 23 16:24 UTC |
	|         | --wait=true --memory=4000      |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --addons=registry              |                        |         |         |                     |                     |
	|         | --addons=metrics-server        |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                        |         |         |                     |                     |
	|         | --addons=gcp-auth              |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	|         | --addons=ingress               |                        |         |         |                     |                     |
	|         | --addons=ingress-dns           |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-048679          | jenkins | v1.30.1 | 10 Jun 23 16:24 UTC | 10 Jun 23 16:24 UTC |
	|         | addons-048679                  |                        |         |         |                     |                     |
	| addons  | enable headlamp                | addons-048679          | jenkins | v1.30.1 | 10 Jun 23 16:24 UTC | 10 Jun 23 16:24 UTC |
	|         | -p addons-048679               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-048679 ip               | addons-048679          | jenkins | v1.30.1 | 10 Jun 23 16:24 UTC | 10 Jun 23 16:24 UTC |
	| addons  | addons-048679 addons disable   | addons-048679          | jenkins | v1.30.1 | 10 Jun 23 16:24 UTC | 10 Jun 23 16:24 UTC |
	|         | registry --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-048679 addons           | addons-048679          | jenkins | v1.30.1 | 10 Jun 23 16:25 UTC | 10 Jun 23 16:25 UTC |
	|         | disable metrics-server         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-048679          | jenkins | v1.30.1 | 10 Jun 23 16:25 UTC | 10 Jun 23 16:25 UTC |
	|         | addons-048679                  |                        |         |         |                     |                     |
	| ssh     | addons-048679 ssh curl -s      | addons-048679          | jenkins | v1.30.1 | 10 Jun 23 16:25 UTC | 10 Jun 23 16:25 UTC |
	|         | http://127.0.0.1/ -H 'Host:    |                        |         |         |                     |                     |
	|         | nginx.example.com'             |                        |         |         |                     |                     |
	| ip      | addons-048679 ip               | addons-048679          | jenkins | v1.30.1 | 10 Jun 23 16:25 UTC | 10 Jun 23 16:25 UTC |
	| addons  | addons-048679 addons           | addons-048679          | jenkins | v1.30.1 | 10 Jun 23 16:25 UTC | 10 Jun 23 16:25 UTC |
	|         | disable csi-hostpath-driver    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | addons-048679 addons disable   | addons-048679          | jenkins | v1.30.1 | 10 Jun 23 16:25 UTC | 10 Jun 23 16:25 UTC |
	|         | ingress-dns --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-048679 addons disable   | addons-048679          | jenkins | v1.30.1 | 10 Jun 23 16:25 UTC | 10 Jun 23 16:25 UTC |
	|         | ingress --alsologtostderr -v=1 |                        |         |         |                     |                     |
	| addons  | addons-048679 addons           | addons-048679          | jenkins | v1.30.1 | 10 Jun 23 16:25 UTC | 10 Jun 23 16:25 UTC |
	|         | disable volumesnapshots        |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/10 16:22:14
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.20.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 16:22:14.508142    8032 out.go:296] Setting OutFile to fd 1 ...
	I0610 16:22:14.508286    8032 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 16:22:14.508294    8032 out.go:309] Setting ErrFile to fd 2...
	I0610 16:22:14.508300    8032 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 16:22:14.508471    8032 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16578-2220/.minikube/bin
	I0610 16:22:14.508916    8032 out.go:303] Setting JSON to false
	I0610 16:22:14.509690    8032 start.go:127] hostinfo: {"hostname":"ip-172-31-31-251","uptime":279,"bootTime":1686413856,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0610 16:22:14.509765    8032 start.go:137] virtualization:  
	I0610 16:22:14.512531    8032 out.go:177] * [addons-048679] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0610 16:22:14.514969    8032 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 16:22:14.516854    8032 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 16:22:14.515192    8032 notify.go:220] Checking for updates...
	I0610 16:22:14.520484    8032 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16578-2220/kubeconfig
	I0610 16:22:14.522092    8032 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16578-2220/.minikube
	I0610 16:22:14.524475    8032 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0610 16:22:14.526426    8032 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 16:22:14.528528    8032 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 16:22:14.552863    8032 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0610 16:22:14.552954    8032 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0610 16:22:14.636976    8032 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:38 SystemTime:2023-06-10 16:22:14.62593085 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Se
rverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0610 16:22:14.637080    8032 docker.go:294] overlay module found
	I0610 16:22:14.639034    8032 out.go:177] * Using the docker driver based on user configuration
	I0610 16:22:14.640833    8032 start.go:297] selected driver: docker
	I0610 16:22:14.640848    8032 start.go:875] validating driver "docker" against <nil>
	I0610 16:22:14.640860    8032 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 16:22:14.641454    8032 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0610 16:22:14.703825    8032 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:38 SystemTime:2023-06-10 16:22:14.69422313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Se
rverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0610 16:22:14.703971    8032 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 16:22:14.704185    8032 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 16:22:14.705769    8032 out.go:177] * Using Docker driver with root privileges
	I0610 16:22:14.707839    8032 cni.go:84] Creating CNI manager for ""
	I0610 16:22:14.707860    8032 cni.go:142] "docker" driver + "containerd" runtime found, recommending kindnet
	I0610 16:22:14.707870    8032 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0610 16:22:14.707880    8032 start_flags.go:319] config:
	{Name:addons-048679 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-048679 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containe
rd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 16:22:14.711002    8032 out.go:177] * Starting control plane node addons-048679 in cluster addons-048679
	I0610 16:22:14.712621    8032 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0610 16:22:14.714400    8032 out.go:177] * Pulling base image ...
	I0610 16:22:14.716115    8032 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime containerd
	I0610 16:22:14.716162    8032 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16578-2220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-containerd-overlay2-arm64.tar.lz4
	I0610 16:22:14.716177    8032 cache.go:57] Caching tarball of preloaded images
	I0610 16:22:14.716181    8032 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon
	I0610 16:22:14.716247    8032 preload.go:174] Found /home/jenkins/minikube-integration/16578-2220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 16:22:14.716257    8032 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on containerd
	I0610 16:22:14.716592    8032 profile.go:148] Saving config to /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/config.json ...
	I0610 16:22:14.716624    8032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/config.json: {Name:mka017fbc9567102ac00ed043b0e102344b07b18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 16:22:14.734219    8032 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b to local cache
	I0610 16:22:14.734320    8032 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local cache directory
	I0610 16:22:14.734337    8032 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local cache directory, skipping pull
	I0610 16:22:14.734342    8032 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b exists in cache, skipping pull
	I0610 16:22:14.734350    8032 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b as a tarball
	I0610 16:22:14.734355    8032 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b from local cache
	I0610 16:22:29.673230    8032 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b from cached tarball
	I0610 16:22:29.673267    8032 cache.go:195] Successfully downloaded all kic artifacts
	I0610 16:22:29.673303    8032 start.go:364] acquiring machines lock for addons-048679: {Name:mkacbcbeaac9eaceca9d3c50e61bfc7b71003e9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 16:22:29.673438    8032 start.go:368] acquired machines lock for "addons-048679" in 104.663µs
	I0610 16:22:29.673469    8032 start.go:93] Provisioning new machine with config: &{Name:addons-048679 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-048679 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0610 16:22:29.673554    8032 start.go:125] createHost starting for "" (driver="docker")
	I0610 16:22:29.675451    8032 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0610 16:22:29.675691    8032 start.go:159] libmachine.API.Create for "addons-048679" (driver="docker")
	I0610 16:22:29.675728    8032 client.go:168] LocalClient.Create starting
	I0610 16:22:29.675843    8032 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/16578-2220/.minikube/certs/ca.pem
	I0610 16:22:29.925349    8032 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/16578-2220/.minikube/certs/cert.pem
	I0610 16:22:30.282417    8032 cli_runner.go:164] Run: docker network inspect addons-048679 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0610 16:22:30.301350    8032 cli_runner.go:211] docker network inspect addons-048679 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0610 16:22:30.301444    8032 network_create.go:281] running [docker network inspect addons-048679] to gather additional debugging logs...
	I0610 16:22:30.301460    8032 cli_runner.go:164] Run: docker network inspect addons-048679
	W0610 16:22:30.318765    8032 cli_runner.go:211] docker network inspect addons-048679 returned with exit code 1
	I0610 16:22:30.318790    8032 network_create.go:284] error running [docker network inspect addons-048679]: docker network inspect addons-048679: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-048679 not found
	I0610 16:22:30.318801    8032 network_create.go:286] output of [docker network inspect addons-048679]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-048679 not found
	
	** /stderr **
	I0610 16:22:30.318870    8032 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0610 16:22:30.337390    8032 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400110b870}
	I0610 16:22:30.337426    8032 network_create.go:123] attempt to create docker network addons-048679 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0610 16:22:30.337486    8032 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-048679 addons-048679
	I0610 16:22:30.411110    8032 network_create.go:107] docker network addons-048679 192.168.49.0/24 created
	I0610 16:22:30.411137    8032 kic.go:117] calculated static IP "192.168.49.2" for the "addons-048679" container
	I0610 16:22:30.411206    8032 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0610 16:22:30.430581    8032 cli_runner.go:164] Run: docker volume create addons-048679 --label name.minikube.sigs.k8s.io=addons-048679 --label created_by.minikube.sigs.k8s.io=true
	I0610 16:22:30.449266    8032 oci.go:103] Successfully created a docker volume addons-048679
	I0610 16:22:30.449354    8032 cli_runner.go:164] Run: docker run --rm --name addons-048679-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-048679 --entrypoint /usr/bin/test -v addons-048679:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b -d /var/lib
	I0610 16:22:32.609562    8032 cli_runner.go:217] Completed: docker run --rm --name addons-048679-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-048679 --entrypoint /usr/bin/test -v addons-048679:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b -d /var/lib: (2.160150185s)
	I0610 16:22:32.609589    8032 oci.go:107] Successfully prepared a docker volume addons-048679
	I0610 16:22:32.609606    8032 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime containerd
	I0610 16:22:32.609623    8032 kic.go:190] Starting extracting preloaded images to volume ...
	I0610 16:22:32.609714    8032 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16578-2220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-048679:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b -I lz4 -xf /preloaded.tar -C /extractDir
	I0610 16:22:36.781718    8032 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16578-2220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-048679:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b -I lz4 -xf /preloaded.tar -C /extractDir: (4.171944484s)
	I0610 16:22:36.781746    8032 kic.go:199] duration metric: took 4.172120 seconds to extract preloaded images to volume
	W0610 16:22:36.781883    8032 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0610 16:22:36.782013    8032 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0610 16:22:36.850741    8032 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-048679 --name addons-048679 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-048679 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-048679 --network addons-048679 --ip 192.168.49.2 --volume addons-048679:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b
	I0610 16:22:37.233783    8032 cli_runner.go:164] Run: docker container inspect addons-048679 --format={{.State.Running}}
	I0610 16:22:37.258763    8032 cli_runner.go:164] Run: docker container inspect addons-048679 --format={{.State.Status}}
	I0610 16:22:37.289169    8032 cli_runner.go:164] Run: docker exec addons-048679 stat /var/lib/dpkg/alternatives/iptables
	I0610 16:22:37.379423    8032 oci.go:144] the created container "addons-048679" has a running status.
	I0610 16:22:37.379446    8032 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16578-2220/.minikube/machines/addons-048679/id_rsa...
	I0610 16:22:37.537485    8032 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16578-2220/.minikube/machines/addons-048679/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0610 16:22:37.582711    8032 cli_runner.go:164] Run: docker container inspect addons-048679 --format={{.State.Status}}
	I0610 16:22:37.609984    8032 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0610 16:22:37.610003    8032 kic_runner.go:114] Args: [docker exec --privileged addons-048679 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0610 16:22:37.690414    8032 cli_runner.go:164] Run: docker container inspect addons-048679 --format={{.State.Status}}
	I0610 16:22:37.728699    8032 machine.go:88] provisioning docker machine ...
	I0610 16:22:37.728732    8032 ubuntu.go:169] provisioning hostname "addons-048679"
	I0610 16:22:37.728801    8032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048679
	I0610 16:22:37.782742    8032 main.go:141] libmachine: Using SSH client type: native
	I0610 16:22:37.783192    8032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0610 16:22:37.783203    8032 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-048679 && echo "addons-048679" | sudo tee /etc/hostname
	I0610 16:22:37.783798    8032 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51454->127.0.0.1:32772: read: connection reset by peer
	I0610 16:22:40.940787    8032 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-048679
	
	I0610 16:22:40.940862    8032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048679
	I0610 16:22:40.960014    8032 main.go:141] libmachine: Using SSH client type: native
	I0610 16:22:40.960449    8032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0610 16:22:40.960473    8032 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-048679' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-048679/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-048679' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 16:22:41.103938    8032 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 16:22:41.103961    8032 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16578-2220/.minikube CaCertPath:/home/jenkins/minikube-integration/16578-2220/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16578-2220/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16578-2220/.minikube}
	I0610 16:22:41.103996    8032 ubuntu.go:177] setting up certificates
	I0610 16:22:41.104005    8032 provision.go:83] configureAuth start
	I0610 16:22:41.104064    8032 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-048679
	I0610 16:22:41.128479    8032 provision.go:138] copyHostCerts
	I0610 16:22:41.128554    8032 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16578-2220/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16578-2220/.minikube/cert.pem (1123 bytes)
	I0610 16:22:41.128684    8032 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16578-2220/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16578-2220/.minikube/key.pem (1675 bytes)
	I0610 16:22:41.128758    8032 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16578-2220/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16578-2220/.minikube/ca.pem (1078 bytes)
	I0610 16:22:41.128808    8032 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16578-2220/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16578-2220/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16578-2220/.minikube/certs/ca-key.pem org=jenkins.addons-048679 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-048679]
	I0610 16:22:41.348416    8032 provision.go:172] copyRemoteCerts
	I0610 16:22:41.348497    8032 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 16:22:41.348540    8032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048679
	I0610 16:22:41.368301    8032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/addons-048679/id_rsa Username:docker}
	I0610 16:22:41.469335    8032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16578-2220/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0610 16:22:41.498645    8032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16578-2220/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0610 16:22:41.528152    8032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16578-2220/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 16:22:41.556777    8032 provision.go:86] duration metric: configureAuth took 452.736818ms
	I0610 16:22:41.556800    8032 ubuntu.go:193] setting minikube options for container-runtime
	I0610 16:22:41.556994    8032 config.go:182] Loaded profile config "addons-048679": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
	I0610 16:22:41.557006    8032 machine.go:91] provisioned docker machine in 3.82829226s
	I0610 16:22:41.557012    8032 client.go:171] LocalClient.Create took 11.881274332s
	I0610 16:22:41.557029    8032 start.go:167] duration metric: libmachine.API.Create for "addons-048679" took 11.88133861s
	I0610 16:22:41.557039    8032 start.go:300] post-start starting for "addons-048679" (driver="docker")
	I0610 16:22:41.557045    8032 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 16:22:41.557096    8032 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 16:22:41.557139    8032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048679
	I0610 16:22:41.576522    8032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/addons-048679/id_rsa Username:docker}
	I0610 16:22:41.678406    8032 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 16:22:41.682828    8032 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0610 16:22:41.682863    8032 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0610 16:22:41.682874    8032 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0610 16:22:41.682881    8032 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0610 16:22:41.682890    8032 filesync.go:126] Scanning /home/jenkins/minikube-integration/16578-2220/.minikube/addons for local assets ...
	I0610 16:22:41.682964    8032 filesync.go:126] Scanning /home/jenkins/minikube-integration/16578-2220/.minikube/files for local assets ...
	I0610 16:22:41.682990    8032 start.go:303] post-start completed in 125.944833ms
	I0610 16:22:41.683340    8032 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-048679
	I0610 16:22:41.704009    8032 profile.go:148] Saving config to /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/config.json ...
	I0610 16:22:41.704310    8032 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 16:22:41.704365    8032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048679
	I0610 16:22:41.722129    8032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/addons-048679/id_rsa Username:docker}
	I0610 16:22:41.821415    8032 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0610 16:22:41.827323    8032 start.go:128] duration metric: createHost completed in 12.15375433s
	I0610 16:22:41.827385    8032 start.go:83] releasing machines lock for "addons-048679", held for 12.153933676s
	I0610 16:22:41.827487    8032 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-048679
	I0610 16:22:41.848462    8032 ssh_runner.go:195] Run: cat /version.json
	I0610 16:22:41.848487    8032 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 16:22:41.848517    8032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048679
	I0610 16:22:41.848547    8032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048679
	I0610 16:22:41.875350    8032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/addons-048679/id_rsa Username:docker}
	I0610 16:22:41.876330    8032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/addons-048679/id_rsa Username:docker}
	I0610 16:22:41.971414    8032 ssh_runner.go:195] Run: systemctl --version
	I0610 16:22:42.119211    8032 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 16:22:42.125058    8032 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0610 16:22:42.156459    8032 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0610 16:22:42.156533    8032 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 16:22:42.193180    8032 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0610 16:22:42.193203    8032 start.go:481] detecting cgroup driver to use...
	I0610 16:22:42.193246    8032 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0610 16:22:42.193305    8032 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 16:22:42.207789    8032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 16:22:42.221919    8032 docker.go:193] disabling cri-docker service (if available) ...
	I0610 16:22:42.222002    8032 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0610 16:22:42.238565    8032 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0610 16:22:42.255422    8032 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0610 16:22:42.350639    8032 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0610 16:22:42.442343    8032 docker.go:209] disabling docker service ...
	I0610 16:22:42.442504    8032 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0610 16:22:42.463388    8032 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0610 16:22:42.478479    8032 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0610 16:22:42.570686    8032 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0610 16:22:42.673190    8032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0610 16:22:42.686572    8032 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 16:22:42.706912    8032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 16:22:42.718692    8032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 16:22:42.730182    8032 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 16:22:42.730288    8032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 16:22:42.742427    8032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 16:22:42.754254    8032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 16:22:42.766199    8032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 16:22:42.778344    8032 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 16:22:42.789824    8032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 16:22:42.801465    8032 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 16:22:42.811569    8032 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 16:22:42.821455    8032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 16:22:42.915651    8032 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0610 16:22:42.996111    8032 start.go:528] Will wait 60s for socket path /run/containerd/containerd.sock
	I0610 16:22:42.996255    8032 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0610 16:22:43.001814    8032 start.go:549] Will wait 60s for crictl version
	I0610 16:22:43.001967    8032 ssh_runner.go:195] Run: which crictl
	I0610 16:22:43.007553    8032 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 16:22:43.069852    8032 start.go:565] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.21
	RuntimeApiVersion:  v1
	I0610 16:22:43.070010    8032 ssh_runner.go:195] Run: containerd --version
	I0610 16:22:43.103586    8032 ssh_runner.go:195] Run: containerd --version
	I0610 16:22:43.137521    8032 out.go:177] * Preparing Kubernetes v1.27.2 on containerd 1.6.21 ...
	I0610 16:22:43.139245    8032 cli_runner.go:164] Run: docker network inspect addons-048679 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0610 16:22:43.156903    8032 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0610 16:22:43.161321    8032 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 16:22:43.176324    8032 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime containerd
	I0610 16:22:43.176396    8032 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 16:22:43.219837    8032 containerd.go:604] all images are preloaded for containerd runtime.
	I0610 16:22:43.219858    8032 containerd.go:518] Images already preloaded, skipping extraction
	I0610 16:22:43.219913    8032 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 16:22:43.259935    8032 containerd.go:604] all images are preloaded for containerd runtime.
	I0610 16:22:43.259955    8032 cache_images.go:84] Images are preloaded, skipping loading
	I0610 16:22:43.260011    8032 ssh_runner.go:195] Run: sudo crictl info
	I0610 16:22:43.303462    8032 cni.go:84] Creating CNI manager for ""
	I0610 16:22:43.303486    8032 cni.go:142] "docker" driver + "containerd" runtime found, recommending kindnet
	I0610 16:22:43.303498    8032 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0610 16:22:43.303516    8032 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-048679 NodeName:addons-048679 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 16:22:43.303646    8032 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-048679"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0610 16:22:43.303720    8032 kubeadm.go:971] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-048679 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:addons-048679 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0610 16:22:43.303788    8032 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0610 16:22:43.314299    8032 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 16:22:43.314365    8032 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 16:22:43.324590    8032 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (385 bytes)
	I0610 16:22:43.345563    8032 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 16:22:43.367412    8032 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0610 16:22:43.389460    8032 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0610 16:22:43.394138    8032 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 16:22:43.407527    8032 certs.go:56] Setting up /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679 for IP: 192.168.49.2
	I0610 16:22:43.407555    8032 certs.go:190] acquiring lock for shared ca certs: {Name:mke388f9dea4ce5085a6492ed88d04b6a5be93b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 16:22:43.407721    8032 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/16578-2220/.minikube/ca.key
	I0610 16:22:43.590073    8032 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16578-2220/.minikube/ca.crt ...
	I0610 16:22:43.590102    8032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16578-2220/.minikube/ca.crt: {Name:mkd932577e34767033c848f726a0d8271367cbb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 16:22:43.590305    8032 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16578-2220/.minikube/ca.key ...
	I0610 16:22:43.590319    8032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16578-2220/.minikube/ca.key: {Name:mk1947b222f0e5a9da7914b05065e88a073638f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 16:22:43.590403    8032 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/16578-2220/.minikube/proxy-client-ca.key
	I0610 16:22:44.227244    8032 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16578-2220/.minikube/proxy-client-ca.crt ...
	I0610 16:22:44.227277    8032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16578-2220/.minikube/proxy-client-ca.crt: {Name:mk30434f3a6ea277637496b4a75409e67031731a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 16:22:44.227459    8032 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16578-2220/.minikube/proxy-client-ca.key ...
	I0610 16:22:44.227471    8032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16578-2220/.minikube/proxy-client-ca.key: {Name:mk58d6cc825ac025bb88fec22727e88fa0850091 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 16:22:44.227583    8032 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.key
	I0610 16:22:44.227601    8032 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.crt with IP's: []
	I0610 16:22:44.513643    8032 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.crt ...
	I0610 16:22:44.513672    8032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.crt: {Name:mk112374d2fe00ae2d859ea94d458383389bab1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 16:22:44.513877    8032 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.key ...
	I0610 16:22:44.513894    8032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.key: {Name:mkb3039ac9e40777c31462d9b15f9a4ff7ae394a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 16:22:44.513976    8032 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/apiserver.key.dd3b5fb2
	I0610 16:22:44.513994    8032 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0610 16:22:44.841798    8032 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/apiserver.crt.dd3b5fb2 ...
	I0610 16:22:44.841833    8032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/apiserver.crt.dd3b5fb2: {Name:mk93df0cbb9c85b28d76c45ff44a5a529ce48873 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 16:22:44.842060    8032 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/apiserver.key.dd3b5fb2 ...
	I0610 16:22:44.842077    8032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/apiserver.key.dd3b5fb2: {Name:mk943f1198700914bbc15a48f0b9f0d913d2e709 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 16:22:44.842157    8032 certs.go:337] copying /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/apiserver.crt
	I0610 16:22:44.842233    8032 certs.go:341] copying /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/apiserver.key
	I0610 16:22:44.842285    8032 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/proxy-client.key
	I0610 16:22:44.842303    8032 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/proxy-client.crt with IP's: []
	I0610 16:22:45.434031    8032 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/proxy-client.crt ...
	I0610 16:22:45.434060    8032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/proxy-client.crt: {Name:mk9ff7839dcc26aa7110d3105ac109a790d495de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 16:22:45.434248    8032 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/proxy-client.key ...
	I0610 16:22:45.434259    8032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/proxy-client.key: {Name:mk703838b1cb3cffe31a7fe9bb5040e6eb61f21e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 16:22:45.434460    8032 certs.go:437] found cert: /home/jenkins/minikube-integration/16578-2220/.minikube/certs/home/jenkins/minikube-integration/16578-2220/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 16:22:45.434499    8032 certs.go:437] found cert: /home/jenkins/minikube-integration/16578-2220/.minikube/certs/home/jenkins/minikube-integration/16578-2220/.minikube/certs/ca.pem (1078 bytes)
	I0610 16:22:45.434541    8032 certs.go:437] found cert: /home/jenkins/minikube-integration/16578-2220/.minikube/certs/home/jenkins/minikube-integration/16578-2220/.minikube/certs/cert.pem (1123 bytes)
	I0610 16:22:45.434569    8032 certs.go:437] found cert: /home/jenkins/minikube-integration/16578-2220/.minikube/certs/home/jenkins/minikube-integration/16578-2220/.minikube/certs/key.pem (1675 bytes)
	I0610 16:22:45.435289    8032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0610 16:22:45.465434    8032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0610 16:22:45.496050    8032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 16:22:45.525108    8032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0610 16:22:45.555308    8032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16578-2220/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 16:22:45.583812    8032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16578-2220/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 16:22:45.612256    8032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16578-2220/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 16:22:45.641307    8032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16578-2220/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 16:22:45.670353    8032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16578-2220/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 16:22:45.700388    8032 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 16:22:45.723306    8032 ssh_runner.go:195] Run: openssl version
	I0610 16:22:45.730628    8032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 16:22:45.742793    8032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 16:22:45.747536    8032 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 10 16:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 16:22:45.747645    8032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 16:22:45.756205    8032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 16:22:45.768511    8032 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0610 16:22:45.773120    8032 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0610 16:22:45.773207    8032 kubeadm.go:404] StartCluster: {Name:addons-048679 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-048679 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 16:22:45.773302    8032 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0610 16:22:45.773376    8032 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0610 16:22:45.815493    8032 cri.go:88] found id: ""
	I0610 16:22:45.815598    8032 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 16:22:45.826281    8032 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 16:22:45.836863    8032 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0610 16:22:45.836965    8032 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 16:22:45.847840    8032 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 16:22:45.847909    8032 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0610 16:22:45.904238    8032 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0610 16:22:45.904491    8032 kubeadm.go:322] [preflight] Running pre-flight checks
	I0610 16:22:45.949407    8032 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0610 16:22:45.949477    8032 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1037-aws
	I0610 16:22:45.949514    8032 kubeadm.go:322] OS: Linux
	I0610 16:22:45.949562    8032 kubeadm.go:322] CGROUPS_CPU: enabled
	I0610 16:22:45.949613    8032 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0610 16:22:45.949662    8032 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0610 16:22:45.949719    8032 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0610 16:22:45.949768    8032 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0610 16:22:45.949819    8032 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0610 16:22:45.949866    8032 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0610 16:22:45.949915    8032 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0610 16:22:45.949963    8032 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0610 16:22:46.033634    8032 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 16:22:46.033820    8032 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 16:22:46.033947    8032 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 16:22:46.282870    8032 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 16:22:46.286504    8032 out.go:204]   - Generating certificates and keys ...
	I0610 16:22:46.286608    8032 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0610 16:22:46.286679    8032 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0610 16:22:46.528728    8032 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 16:22:46.704537    8032 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0610 16:22:48.249296    8032 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0610 16:22:48.655687    8032 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0610 16:22:48.935344    8032 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0610 16:22:48.935748    8032 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-048679 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0610 16:22:49.331165    8032 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0610 16:22:49.331516    8032 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-048679 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0610 16:22:49.504218    8032 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 16:22:50.037807    8032 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 16:22:50.354704    8032 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0610 16:22:50.354952    8032 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 16:22:51.584460    8032 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 16:22:52.204843    8032 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 16:22:52.725215    8032 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 16:22:53.044310    8032 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 16:22:53.061958    8032 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 16:22:53.062053    8032 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 16:22:53.062091    8032 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0610 16:22:53.171066    8032 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 16:22:53.173599    8032 out.go:204]   - Booting up control plane ...
	I0610 16:22:53.173725    8032 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 16:22:53.173877    8032 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 16:22:53.173942    8032 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 16:22:53.176850    8032 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 16:22:53.177048    8032 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 16:23:04.179745    8032 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.003005 seconds
	I0610 16:23:04.179862    8032 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 16:23:04.196033    8032 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 16:23:04.721922    8032 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 16:23:04.722109    8032 kubeadm.go:322] [mark-control-plane] Marking the node addons-048679 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 16:23:05.233742    8032 kubeadm.go:322] [bootstrap-token] Using token: pgo8qy.n1k5c9mdxp4r7uym
	I0610 16:23:05.235377    8032 out.go:204]   - Configuring RBAC rules ...
	I0610 16:23:05.235523    8032 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 16:23:05.241196    8032 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 16:23:05.250469    8032 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 16:23:05.254425    8032 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 16:23:05.258536    8032 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 16:23:05.262629    8032 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 16:23:05.281114    8032 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 16:23:05.516002    8032 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0610 16:23:05.647484    8032 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0610 16:23:05.648976    8032 kubeadm.go:322] 
	I0610 16:23:05.649048    8032 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0610 16:23:05.649063    8032 kubeadm.go:322] 
	I0610 16:23:05.649136    8032 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0610 16:23:05.649144    8032 kubeadm.go:322] 
	I0610 16:23:05.649169    8032 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0610 16:23:05.649389    8032 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 16:23:05.649449    8032 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 16:23:05.649459    8032 kubeadm.go:322] 
	I0610 16:23:05.649510    8032 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0610 16:23:05.649516    8032 kubeadm.go:322] 
	I0610 16:23:05.649561    8032 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 16:23:05.649567    8032 kubeadm.go:322] 
	I0610 16:23:05.649616    8032 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0610 16:23:05.649695    8032 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 16:23:05.649763    8032 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 16:23:05.649772    8032 kubeadm.go:322] 
	I0610 16:23:05.649851    8032 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 16:23:05.649927    8032 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0610 16:23:05.649936    8032 kubeadm.go:322] 
	I0610 16:23:05.650015    8032 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token pgo8qy.n1k5c9mdxp4r7uym \
	I0610 16:23:05.650115    8032 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5bc14f008eafd77d085ab1d9d6f7c71ae8f6d38083eb171c3f7e9c167a550f4a \
	I0610 16:23:05.650139    8032 kubeadm.go:322] 	--control-plane 
	I0610 16:23:05.650147    8032 kubeadm.go:322] 
	I0610 16:23:05.650227    8032 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0610 16:23:05.650235    8032 kubeadm.go:322] 
	I0610 16:23:05.650476    8032 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token pgo8qy.n1k5c9mdxp4r7uym \
	I0610 16:23:05.650590    8032 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5bc14f008eafd77d085ab1d9d6f7c71ae8f6d38083eb171c3f7e9c167a550f4a 
	I0610 16:23:05.654601    8032 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1037-aws\n", err: exit status 1
	I0610 16:23:05.654716    8032 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 16:23:05.654879    8032 kubeadm.go:322] W0610 16:22:46.033098     896 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0610 16:23:05.655038    8032 kubeadm.go:322] W0610 16:22:53.174053     896 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0610 16:23:05.655056    8032 cni.go:84] Creating CNI manager for ""
	I0610 16:23:05.655066    8032 cni.go:142] "docker" driver + "containerd" runtime found, recommending kindnet
	I0610 16:23:05.657202    8032 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0610 16:23:05.659238    8032 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0610 16:23:05.667130    8032 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.27.2/kubectl ...
	I0610 16:23:05.667150    8032 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0610 16:23:05.699288    8032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0610 16:23:06.661322    8032 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 16:23:06.661409    8032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:23:06.661511    8032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=eafc8e84d7336f18f4fb303d71d15fbd84fd16d5 minikube.k8s.io/name=addons-048679 minikube.k8s.io/updated_at=2023_06_10T16_23_06_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:23:06.899137    8032 ops.go:34] apiserver oom_adj: -16
	I0610 16:23:06.899220    8032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:23:07.498289    8032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:23:07.997837    8032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:23:08.498561    8032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:23:08.998517    8032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:23:09.498645    8032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:23:09.998605    8032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:23:10.498223    8032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:23:10.998675    8032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:23:11.498292    8032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:23:11.998552    8032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:23:12.498457    8032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:23:12.998217    8032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:23:13.498590    8032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:23:13.998026    8032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:23:14.497907    8032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:23:14.998061    8032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:23:15.498395    8032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:23:15.997629    8032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:23:16.497670    8032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:23:16.997742    8032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:23:17.498214    8032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:23:17.997969    8032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:23:18.498143    8032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:23:18.761035    8032 kubeadm.go:1076] duration metric: took 12.099687429s to wait for elevateKubeSystemPrivileges.
	I0610 16:23:18.761063    8032 kubeadm.go:406] StartCluster complete in 32.987858941s
	I0610 16:23:18.761078    8032 settings.go:142] acquiring lock: {Name:mka1eca2c16888376cc44d7f55f3d7e369175085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 16:23:18.761185    8032 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16578-2220/kubeconfig
	I0610 16:23:18.761611    8032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16578-2220/kubeconfig: {Name:mk9761da47d382771738f32de309583d22d7ff06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 16:23:18.762011    8032 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0610 16:23:18.762279    8032 config.go:182] Loaded profile config "addons-048679": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
	I0610 16:23:18.762316    8032 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0610 16:23:18.762455    8032 addons.go:66] Setting volumesnapshots=true in profile "addons-048679"
	I0610 16:23:18.762468    8032 addons.go:228] Setting addon volumesnapshots=true in "addons-048679"
	I0610 16:23:18.762533    8032 host.go:66] Checking if "addons-048679" exists ...
	I0610 16:23:18.763080    8032 cli_runner.go:164] Run: docker container inspect addons-048679 --format={{.State.Status}}
	I0610 16:23:18.763295    8032 addons.go:66] Setting ingress=true in profile "addons-048679"
	I0610 16:23:18.763317    8032 addons.go:228] Setting addon ingress=true in "addons-048679"
	I0610 16:23:18.763371    8032 host.go:66] Checking if "addons-048679" exists ...
	I0610 16:23:18.763755    8032 cli_runner.go:164] Run: docker container inspect addons-048679 --format={{.State.Status}}
	I0610 16:23:18.763824    8032 addons.go:66] Setting cloud-spanner=true in profile "addons-048679"
	I0610 16:23:18.763838    8032 addons.go:228] Setting addon cloud-spanner=true in "addons-048679"
	I0610 16:23:18.763862    8032 host.go:66] Checking if "addons-048679" exists ...
	I0610 16:23:18.764199    8032 cli_runner.go:164] Run: docker container inspect addons-048679 --format={{.State.Status}}
	I0610 16:23:18.764268    8032 addons.go:66] Setting csi-hostpath-driver=true in profile "addons-048679"
	I0610 16:23:18.764296    8032 addons.go:228] Setting addon csi-hostpath-driver=true in "addons-048679"
	I0610 16:23:18.764321    8032 host.go:66] Checking if "addons-048679" exists ...
	I0610 16:23:18.764663    8032 cli_runner.go:164] Run: docker container inspect addons-048679 --format={{.State.Status}}
	I0610 16:23:18.764738    8032 addons.go:66] Setting default-storageclass=true in profile "addons-048679"
	I0610 16:23:18.764752    8032 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-048679"
	I0610 16:23:18.764962    8032 cli_runner.go:164] Run: docker container inspect addons-048679 --format={{.State.Status}}
	I0610 16:23:18.765013    8032 addons.go:66] Setting gcp-auth=true in profile "addons-048679"
	I0610 16:23:18.765024    8032 mustload.go:65] Loading cluster: addons-048679
	I0610 16:23:18.765158    8032 config.go:182] Loaded profile config "addons-048679": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
	I0610 16:23:18.765345    8032 cli_runner.go:164] Run: docker container inspect addons-048679 --format={{.State.Status}}
	I0610 16:23:18.765397    8032 addons.go:66] Setting metrics-server=true in profile "addons-048679"
	I0610 16:23:18.765406    8032 addons.go:228] Setting addon metrics-server=true in "addons-048679"
	I0610 16:23:18.765429    8032 host.go:66] Checking if "addons-048679" exists ...
	I0610 16:23:18.765757    8032 cli_runner.go:164] Run: docker container inspect addons-048679 --format={{.State.Status}}
	I0610 16:23:18.765815    8032 addons.go:66] Setting ingress-dns=true in profile "addons-048679"
	I0610 16:23:18.765824    8032 addons.go:228] Setting addon ingress-dns=true in "addons-048679"
	I0610 16:23:18.765852    8032 host.go:66] Checking if "addons-048679" exists ...
	I0610 16:23:18.766185    8032 cli_runner.go:164] Run: docker container inspect addons-048679 --format={{.State.Status}}
	I0610 16:23:18.766235    8032 addons.go:66] Setting inspektor-gadget=true in profile "addons-048679"
	I0610 16:23:18.766243    8032 addons.go:228] Setting addon inspektor-gadget=true in "addons-048679"
	I0610 16:23:18.766263    8032 host.go:66] Checking if "addons-048679" exists ...
	I0610 16:23:18.767071    8032 cli_runner.go:164] Run: docker container inspect addons-048679 --format={{.State.Status}}
	I0610 16:23:18.767221    8032 addons.go:66] Setting registry=true in profile "addons-048679"
	I0610 16:23:18.767253    8032 addons.go:228] Setting addon registry=true in "addons-048679"
	I0610 16:23:18.767285    8032 host.go:66] Checking if "addons-048679" exists ...
	I0610 16:23:18.767644    8032 cli_runner.go:164] Run: docker container inspect addons-048679 --format={{.State.Status}}
	I0610 16:23:18.779624    8032 addons.go:66] Setting storage-provisioner=true in profile "addons-048679"
	I0610 16:23:18.779656    8032 addons.go:228] Setting addon storage-provisioner=true in "addons-048679"
	I0610 16:23:18.779699    8032 host.go:66] Checking if "addons-048679" exists ...
	I0610 16:23:18.780129    8032 cli_runner.go:164] Run: docker container inspect addons-048679 --format={{.State.Status}}
	I0610 16:23:18.828622    8032 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0610 16:23:18.845462    8032 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0610 16:23:18.845485    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0610 16:23:18.845558    8032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048679
	I0610 16:23:18.886704    8032 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.17.0
	I0610 16:23:18.890659    8032 addons.go:420] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0610 16:23:18.890685    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0610 16:23:18.890758    8032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048679
	I0610 16:23:18.984596    8032 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.0
	I0610 16:23:18.990666    8032 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 16:23:18.992179    8032 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0610 16:23:18.991780    8032 out.go:177]   - Using image docker.io/registry:2.8.1
	I0610 16:23:18.998221    8032 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0610 16:23:18.996815    8032 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 16:23:19.003489    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 16:23:19.003607    8032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048679
	I0610 16:23:19.015848    8032 addons.go:420] installing /etc/kubernetes/addons/registry-rc.yaml
	I0610 16:23:19.015875    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0610 16:23:19.015941    8032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048679
	I0610 16:23:19.035301    8032 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0610 16:23:19.037505    8032 addons.go:420] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0610 16:23:19.037525    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0610 16:23:19.037588    8032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048679
	I0610 16:23:19.064238    8032 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0610 16:23:19.066312    8032 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0610 16:23:19.068037    8032 addons.go:420] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0610 16:23:19.068055    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0610 16:23:19.068131    8032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048679
	I0610 16:23:19.075328    8032 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0610 16:23:19.093794    8032 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0610 16:23:19.096160    8032 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0610 16:23:19.098264    8032 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0610 16:23:19.097137    8032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/addons-048679/id_rsa Username:docker}
	I0610 16:23:19.123600    8032 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0610 16:23:19.122310    8032 host.go:66] Checking if "addons-048679" exists ...
	I0610 16:23:19.126036    8032 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0610 16:23:19.128347    8032 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0610 16:23:19.130558    8032 addons.go:420] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0610 16:23:19.130578    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0610 16:23:19.130643    8032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048679
	I0610 16:23:19.140242    8032 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.3
	I0610 16:23:19.142005    8032 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0610 16:23:19.142025    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0610 16:23:19.142093    8032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048679
	I0610 16:23:19.150693    8032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/addons-048679/id_rsa Username:docker}
	I0610 16:23:19.182361    8032 addons.go:228] Setting addon default-storageclass=true in "addons-048679"
	I0610 16:23:19.182603    8032 host.go:66] Checking if "addons-048679" exists ...
	I0610 16:23:19.183171    8032 cli_runner.go:164] Run: docker container inspect addons-048679 --format={{.State.Status}}
	I0610 16:23:19.189443    8032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/addons-048679/id_rsa Username:docker}
	W0610 16:23:19.205547    8032 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "addons-048679" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0610 16:23:19.205634    8032 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
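The W/E pair above is the apiserver's optimistic-concurrency check firing: the coredns Deployment changed between minikube's read and its update, so the write is rejected with "the object has been modified". minikube logs it as non-retryable and continues; the standard remedy, if one did want to retry, is to redo the read-modify-write inside a conflict retry. A minimal client-go sketch under that assumption, with clientset construction omitted and names taken from the log:

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// scaleCoreDNS re-fetches the Deployment on every attempt, so a conflict caused by
	// a concurrent writer simply triggers another round instead of failing the step.
	func scaleCoreDNS(ctx context.Context, cs kubernetes.Interface, replicas int32) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			dep, err := cs.AppsV1().Deployments("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
			if err != nil {
				return err
			}
			dep.Spec.Replicas = &replicas
			_, err = cs.AppsV1().Deployments("kube-system").Update(ctx, dep, metav1.UpdateOptions{})
			return err
		})
	}
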
	I0610 16:23:19.205654    8032 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0610 16:23:19.207390    8032 out.go:177] * Verifying Kubernetes components...
	I0610 16:23:19.209237    8032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 16:23:19.251184    8032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/addons-048679/id_rsa Username:docker}
	I0610 16:23:19.252384    8032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/addons-048679/id_rsa Username:docker}
	I0610 16:23:19.256488    8032 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.6
	I0610 16:23:19.258716    8032 addons.go:420] installing /etc/kubernetes/addons/deployment.yaml
	I0610 16:23:19.258735    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0610 16:23:19.258809    8032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048679
	I0610 16:23:19.297147    8032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/addons-048679/id_rsa Username:docker}
	I0610 16:23:19.314148    8032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/addons-048679/id_rsa Username:docker}
	I0610 16:23:19.331421    8032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/addons-048679/id_rsa Username:docker}
	I0610 16:23:19.358842    8032 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 16:23:19.358862    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 16:23:19.358924    8032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048679
	I0610 16:23:19.390596    8032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/addons-048679/id_rsa Username:docker}
	I0610 16:23:19.407738    8032 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0610 16:23:19.420316    8032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/addons-048679/id_rsa Username:docker}
	I0610 16:23:19.510628    8032 node_ready.go:35] waiting up to 6m0s for node "addons-048679" to be "Ready" ...
	I0610 16:23:19.520006    8032 node_ready.go:49] node "addons-048679" has status "Ready":"True"
	I0610 16:23:19.520078    8032 node_ready.go:38] duration metric: took 9.425094ms waiting for node "addons-048679" to be "Ready" ...
	I0610 16:23:19.520101    8032 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 16:23:19.553637    8032 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-9drks" in "kube-system" namespace to be "Ready" ...
	I0610 16:23:19.668784    8032 addons.go:420] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0610 16:23:19.668843    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0610 16:23:19.699210    8032 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0610 16:23:19.699280    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0610 16:23:19.761720    8032 addons.go:420] installing /etc/kubernetes/addons/ig-role.yaml
	I0610 16:23:19.761786    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0610 16:23:19.767845    8032 addons.go:420] installing /etc/kubernetes/addons/registry-svc.yaml
	I0610 16:23:19.767900    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0610 16:23:19.823154    8032 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0610 16:23:19.823228    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0610 16:23:19.829227    8032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0610 16:23:19.834893    8032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 16:23:19.843728    8032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0610 16:23:19.882541    8032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 16:23:19.893699    8032 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0610 16:23:19.893722    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0610 16:23:19.903917    8032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0610 16:23:19.909359    8032 addons.go:420] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0610 16:23:19.909383    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0610 16:23:19.927351    8032 addons.go:420] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0610 16:23:19.927374    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0610 16:23:20.013921    8032 addons.go:420] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0610 16:23:20.013944    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0610 16:23:20.028430    8032 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0610 16:23:20.028454    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0610 16:23:20.069938    8032 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0610 16:23:20.069961    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0610 16:23:20.088447    8032 addons.go:420] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0610 16:23:20.088474    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0610 16:23:20.131559    8032 addons.go:420] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0610 16:23:20.131584    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0610 16:23:20.181684    8032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0610 16:23:20.237008    8032 addons.go:420] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0610 16:23:20.237033    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0610 16:23:20.284995    8032 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0610 16:23:20.285017    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0610 16:23:20.315252    8032 addons.go:420] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0610 16:23:20.315273    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0610 16:23:20.419776    8032 addons.go:420] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0610 16:23:20.419800    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0610 16:23:20.441054    8032 addons.go:420] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0610 16:23:20.441079    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0610 16:23:20.487789    8032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0610 16:23:20.603199    8032 addons.go:420] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0610 16:23:20.603224    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0610 16:23:20.683183    8032 addons.go:420] installing /etc/kubernetes/addons/ig-crd.yaml
	I0610 16:23:20.683207    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0610 16:23:20.792872    8032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0610 16:23:21.053456    8032 addons.go:420] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0610 16:23:21.053479    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0610 16:23:21.117262    8032 addons.go:420] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0610 16:23:21.117284    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0610 16:23:21.343955    8032 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0610 16:23:21.343977    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0610 16:23:21.444639    8032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0610 16:23:21.567453    8032 pod_ready.go:102] pod "coredns-5d78c9869d-9drks" in "kube-system" namespace has status "Ready":"False"
	I0610 16:23:21.599699    8032 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0610 16:23:21.599724    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0610 16:23:21.765534    8032 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0610 16:23:21.765558    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0610 16:23:21.950365    8032 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.542580937s)
	I0610 16:23:21.950394    8032 start.go:916] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
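The pipeline that just completed (started at 16:23:19.407738) rewrites the coredns ConfigMap in place: it splices a hosts {} stanza mapping host.minikube.internal to 192.168.49.1 in front of the "forward . /etc/resolv.conf" line, adds a log directive after errors, and feeds the result to kubectl replace. A small Go sketch of the same string transformation, purely illustrative since minikube does it with sed as shown above:

	import (
		"fmt"
		"strings"
	)

	// injectHostRecord inserts a CoreDNS hosts{} block immediately before the
	// "forward . /etc/resolv.conf" line of a Corefile, mirroring the sed expression in the log.
	func injectHostRecord(corefile, gatewayIP string) string {
		hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }", gatewayIP)
		var out []string
		for _, line := range strings.Split(corefile, "\n") {
			if strings.Contains(line, "forward . /etc/resolv.conf") {
				out = append(out, hosts)
			}
			out = append(out, line)
		}
		return strings.Join(out, "\n")
	}

With injectHostRecord(corefile, "192.168.49.1"), in-cluster lookups of host.minikube.internal resolve to that address, which is what the "host record injected" line above confirms.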
	I0610 16:23:22.080621    8032 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0610 16:23:22.080646    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0610 16:23:22.250605    8032 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0610 16:23:22.250658    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0610 16:23:22.400185    8032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0610 16:23:23.617114    8032 pod_ready.go:102] pod "coredns-5d78c9869d-9drks" in "kube-system" namespace has status "Ready":"False"
	I0610 16:23:25.650820    8032 pod_ready.go:102] pod "coredns-5d78c9869d-9drks" in "kube-system" namespace has status "Ready":"False"
	I0610 16:23:25.803633    8032 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.974370392s)
	I0610 16:23:25.803707    8032 addons.go:464] Verifying addon ingress=true in "addons-048679"
	I0610 16:23:25.812779    8032 out.go:177] * Verifying ingress addon...
	I0610 16:23:25.803951    8032 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.969037973s)
	I0610 16:23:25.804003    8032 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.960249787s)
	I0610 16:23:25.804026    8032 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.921466532s)
	I0610 16:23:25.804050    8032 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.900112835s)
	I0610 16:23:25.804081    8032 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.622372212s)
	I0610 16:23:25.804130    8032 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.316316214s)
	I0610 16:23:25.804204    8032 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.011306615s)
	I0610 16:23:25.804249    8032 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.359583384s)
	I0610 16:23:25.820941    8032 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0610 16:23:25.821189    8032 addons.go:464] Verifying addon registry=true in "addons-048679"
	I0610 16:23:25.826005    8032 out.go:177] * Verifying registry addon...
	W0610 16:23:25.821382    8032 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0610 16:23:25.821393    8032 addons.go:464] Verifying addon metrics-server=true in "addons-048679"
	I0610 16:23:25.827601    8032 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0610 16:23:25.827816    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:25.828586    8032 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0610 16:23:25.828758    8032 retry.go:31] will retry after 171.868452ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
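	This retry is working around an ordering problem: the VolumeSnapshot CRDs and the csi-hostpath-snapclass VolumeSnapshotClass are sent in a single kubectl apply, and the CRD has not been registered by the time the custom resource is mapped, hence "no matches for kind VolumeSnapshotClass ... ensure CRDs are installed first". retry.go simply re-runs the apply after a short backoff, and a forced re-apply follows at 16:23:26.001645 below. A rough sketch of such a retry loop around kubectl, with the helper name, attempt count, and backoff as illustrative assumptions:

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyWithRetry re-runs `kubectl apply` until it succeeds or attempts run out,
	// which is enough to ride out the window between CRD creation and registration.
	func applyWithRetry(attempts int, backoff time.Duration, files ...string) error {
		args := []string{"apply"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("kubectl", args...).CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("kubectl apply attempt %d: %v\n%s", i+1, err, out)
			time.Sleep(backoff)
		}
		return lastErr
	}
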
	I0610 16:23:25.838610    8032 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0610 16:23:25.838636    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:25.938993    8032 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0610 16:23:25.939069    8032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048679
	I0610 16:23:25.970393    8032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/addons-048679/id_rsa Username:docker}
	I0610 16:23:26.001645    8032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0610 16:23:26.225064    8032 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0610 16:23:26.276337    8032 addons.go:228] Setting addon gcp-auth=true in "addons-048679"
	I0610 16:23:26.276422    8032 host.go:66] Checking if "addons-048679" exists ...
	I0610 16:23:26.276913    8032 cli_runner.go:164] Run: docker container inspect addons-048679 --format={{.State.Status}}
	I0610 16:23:26.304585    8032 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0610 16:23:26.304659    8032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048679
	I0610 16:23:26.339425    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:26.344301    8032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/addons-048679/id_rsa Username:docker}
	I0610 16:23:26.347339    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:26.840097    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:26.859060    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:27.352523    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:27.376427    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:27.528889    8032 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.128653588s)
	I0610 16:23:27.528966    8032 addons.go:464] Verifying addon csi-hostpath-driver=true in "addons-048679"
	I0610 16:23:27.531252    8032 out.go:177] * Verifying csi-hostpath-driver addon...
	I0610 16:23:27.534844    8032 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0610 16:23:27.545386    8032 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0610 16:23:27.545458    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
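The kapi.go:96 lines that dominate the rest of this log are a readiness poll: roughly twice a second per addon, minikube lists the pods behind a label selector and reports their current state until it is no longer Pending. A condensed client-go sketch of that loop; the selector strings and namespaces are the ones in the log, while the helper itself is illustrative rather than the actual kapi.go code:

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodsRunning polls until every pod matching the selector reports phase Running.
	func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, err
			}
			if len(pods.Items) == 0 {
				return false, nil // nothing scheduled yet
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil // still Pending, keep polling
				}
			}
			return true, nil
		})
	}

For example, the csi-hostpath-driver wait above corresponds to waitForPodsRunning(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute).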
	I0610 16:23:27.832984    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:27.844558    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:28.050299    8032 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.048609375s)
	I0610 16:23:28.050413    8032 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.74580768s)
	I0610 16:23:28.054487    8032 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0610 16:23:28.053582    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:28.058716    8032 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0610 16:23:28.060756    8032 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0610 16:23:28.060804    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0610 16:23:28.067033    8032 pod_ready.go:102] pod "coredns-5d78c9869d-9drks" in "kube-system" namespace has status "Ready":"False"
	I0610 16:23:28.087372    8032 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0610 16:23:28.087434    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0610 16:23:28.113107    8032 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0610 16:23:28.113169    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0610 16:23:28.140318    8032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0610 16:23:28.333631    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:28.343822    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:28.552402    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:28.836038    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:28.845713    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:28.987118    8032 addons.go:464] Verifying addon gcp-auth=true in "addons-048679"
	I0610 16:23:28.989422    8032 out.go:177] * Verifying gcp-auth addon...
	I0610 16:23:28.991828    8032 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0610 16:23:29.008133    8032 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0610 16:23:29.008193    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:29.052232    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:29.332628    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:29.343993    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:29.512827    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:29.552514    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:29.833405    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:29.844448    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:30.019074    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:30.053826    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:30.068960    8032 pod_ready.go:102] pod "coredns-5d78c9869d-9drks" in "kube-system" namespace has status "Ready":"False"
	I0610 16:23:30.333475    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:30.344131    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:30.512604    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:30.552298    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:30.833510    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:30.844848    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:31.014331    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:31.052306    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:31.333123    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:31.343621    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:31.512836    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:31.550963    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:31.834819    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:31.846235    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:32.013458    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:32.052701    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:32.333869    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:32.346783    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:32.516297    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:32.567737    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:32.580014    8032 pod_ready.go:102] pod "coredns-5d78c9869d-9drks" in "kube-system" namespace has status "Ready":"False"
	I0610 16:23:32.836222    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:32.846272    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:33.013429    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:33.052594    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:33.334267    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:33.346827    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:33.515110    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:33.551536    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:33.833307    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:33.844334    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:34.013863    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:34.053566    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:34.332538    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:34.344187    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:34.512455    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:34.552287    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:34.832609    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:34.844495    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:35.013021    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:35.051789    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:35.082944    8032 pod_ready.go:102] pod "coredns-5d78c9869d-9drks" in "kube-system" namespace has status "Ready":"False"
	I0610 16:23:35.332871    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:35.343485    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:35.512414    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:35.551222    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:35.832667    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:35.843298    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:36.011985    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:36.052585    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:36.333393    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:36.344591    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:36.512653    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:36.551754    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:36.832857    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:36.844241    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:37.012861    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:37.052895    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:37.332911    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:37.343674    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:37.512830    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:37.552485    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:37.567476    8032 pod_ready.go:102] pod "coredns-5d78c9869d-9drks" in "kube-system" namespace has status "Ready":"False"
	I0610 16:23:37.833613    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:37.843657    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:38.013035    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:38.052347    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:38.332941    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:38.344814    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:38.513482    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:38.552821    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:38.832506    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:38.844474    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:39.018376    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:39.052328    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:39.332346    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:39.344224    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:39.512474    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:39.552076    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:39.832278    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:39.844587    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:40.016703    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:40.051716    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:40.067855    8032 pod_ready.go:102] pod "coredns-5d78c9869d-9drks" in "kube-system" namespace has status "Ready":"False"
	I0610 16:23:40.333243    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:40.348055    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:40.511899    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:40.552377    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:40.833017    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:40.845705    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:41.013220    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:41.054893    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:41.332952    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:41.343750    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:41.512536    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:41.551210    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:41.843745    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:41.852862    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:42.012852    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:42.051807    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:42.333068    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:42.343876    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:42.511930    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:42.551183    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:42.566659    8032 pod_ready.go:102] pod "coredns-5d78c9869d-9drks" in "kube-system" namespace has status "Ready":"False"
	I0610 16:23:42.832411    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:42.844238    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:43.012799    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:43.051598    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:43.332842    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:43.343715    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:43.512564    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:43.551795    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:43.832966    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:43.844268    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:44.021089    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:44.050911    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:44.335989    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:44.343784    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:44.512376    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:44.552177    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:44.832584    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:44.843216    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:45.012394    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:45.051755    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:45.067454    8032 pod_ready.go:102] pod "coredns-5d78c9869d-9drks" in "kube-system" namespace has status "Ready":"False"
	I0610 16:23:45.333521    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:45.344144    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:45.512142    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:45.551335    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:45.833791    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:45.845103    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:46.012356    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:46.053036    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:46.333292    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:46.344006    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:46.512324    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:46.551836    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:46.833014    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:46.843969    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:47.012812    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:47.052939    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:47.333236    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:47.344180    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:47.512487    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:47.551851    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:47.567740    8032 pod_ready.go:102] pod "coredns-5d78c9869d-9drks" in "kube-system" namespace has status "Ready":"False"
	I0610 16:23:47.833141    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:47.844549    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:48.012586    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:48.051628    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:48.332573    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:48.343548    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:48.512958    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:48.550750    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:48.832760    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:48.843812    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:49.011762    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:49.052353    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:49.333061    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:49.344110    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:49.512241    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:49.552035    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:49.832792    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:49.844394    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:50.012139    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:50.052418    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:50.066767    8032 pod_ready.go:102] pod "coredns-5d78c9869d-9drks" in "kube-system" namespace has status "Ready":"False"
	I0610 16:23:50.332502    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:50.344043    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:50.517320    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:50.552085    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:50.832788    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:50.844090    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:51.012184    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:51.051441    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:51.332181    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:51.346335    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:51.512335    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:51.551445    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:51.833238    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:51.844731    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:52.012376    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:52.051538    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:52.332120    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:52.343676    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:52.512418    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:52.551493    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:52.567566    8032 pod_ready.go:102] pod "coredns-5d78c9869d-9drks" in "kube-system" namespace has status "Ready":"False"
	I0610 16:23:52.841260    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:52.846130    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:53.013117    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:53.051535    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:53.332492    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:53.344351    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:53.513066    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:53.552177    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:53.832807    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:53.843303    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:54.012629    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:54.054699    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:54.332690    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:54.343413    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:54.512546    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:54.551590    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:54.833034    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:54.843947    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:55.012191    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:55.052251    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:55.067739    8032 pod_ready.go:102] pod "coredns-5d78c9869d-9drks" in "kube-system" namespace has status "Ready":"False"
	I0610 16:23:55.335075    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:55.344101    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:55.514024    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:55.551847    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:55.833134    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:55.843746    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:56.013030    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:56.051856    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:56.332775    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:56.344035    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:56.511745    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:56.553668    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:56.832764    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:56.844324    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:57.012982    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:57.051606    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:57.333225    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:57.344161    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:57.512228    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:57.551622    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:57.567279    8032 pod_ready.go:102] pod "coredns-5d78c9869d-9drks" in "kube-system" namespace has status "Ready":"False"
	I0610 16:23:57.833752    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:57.844604    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:58.019399    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:58.053823    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:58.332768    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:58.343243    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:58.519727    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:58.556537    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:58.832640    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:58.843111    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:59.011946    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:59.051523    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:59.333540    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:59.344739    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:23:59.512871    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:23:59.551515    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:23:59.567353    8032 pod_ready.go:102] pod "coredns-5d78c9869d-9drks" in "kube-system" namespace has status "Ready":"False"
	I0610 16:23:59.832864    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:23:59.843568    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:24:00.051492    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:00.118359    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:00.333487    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:00.344090    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:24:00.512263    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:00.551356    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:00.834234    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:00.843926    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:24:01.012328    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:01.057180    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:01.333062    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:01.344152    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:24:01.512414    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:01.552208    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:01.574904    8032 pod_ready.go:102] pod "coredns-5d78c9869d-9drks" in "kube-system" namespace has status "Ready":"False"
	I0610 16:24:01.833179    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:01.844259    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:24:02.012583    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:02.058125    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:02.333725    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:02.343850    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:24:02.514213    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:02.552267    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:02.567298    8032 pod_ready.go:92] pod "coredns-5d78c9869d-9drks" in "kube-system" namespace has status "Ready":"True"
	I0610 16:24:02.567326    8032 pod_ready.go:81] duration metric: took 43.0136169s waiting for pod "coredns-5d78c9869d-9drks" in "kube-system" namespace to be "Ready" ...
	I0610 16:24:02.567338    8032 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-qmk44" in "kube-system" namespace to be "Ready" ...
	I0610 16:24:02.574105    8032 pod_ready.go:92] pod "coredns-5d78c9869d-qmk44" in "kube-system" namespace has status "Ready":"True"
	I0610 16:24:02.574130    8032 pod_ready.go:81] duration metric: took 6.7625ms waiting for pod "coredns-5d78c9869d-qmk44" in "kube-system" namespace to be "Ready" ...
	I0610 16:24:02.574144    8032 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-048679" in "kube-system" namespace to be "Ready" ...
	I0610 16:24:02.580009    8032 pod_ready.go:92] pod "etcd-addons-048679" in "kube-system" namespace has status "Ready":"True"
	I0610 16:24:02.580048    8032 pod_ready.go:81] duration metric: took 5.897021ms waiting for pod "etcd-addons-048679" in "kube-system" namespace to be "Ready" ...
	I0610 16:24:02.580079    8032 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-048679" in "kube-system" namespace to be "Ready" ...
	I0610 16:24:02.585914    8032 pod_ready.go:92] pod "kube-apiserver-addons-048679" in "kube-system" namespace has status "Ready":"True"
	I0610 16:24:02.585937    8032 pod_ready.go:81] duration metric: took 5.845961ms waiting for pod "kube-apiserver-addons-048679" in "kube-system" namespace to be "Ready" ...
	I0610 16:24:02.585949    8032 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-048679" in "kube-system" namespace to be "Ready" ...
	I0610 16:24:02.592384    8032 pod_ready.go:92] pod "kube-controller-manager-addons-048679" in "kube-system" namespace has status "Ready":"True"
	I0610 16:24:02.592407    8032 pod_ready.go:81] duration metric: took 6.45037ms waiting for pod "kube-controller-manager-addons-048679" in "kube-system" namespace to be "Ready" ...
	I0610 16:24:02.592420    8032 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wpw2t" in "kube-system" namespace to be "Ready" ...
	I0610 16:24:02.833151    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:02.843989    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:24:02.965024    8032 pod_ready.go:92] pod "kube-proxy-wpw2t" in "kube-system" namespace has status "Ready":"True"
	I0610 16:24:02.965088    8032 pod_ready.go:81] duration metric: took 372.659917ms waiting for pod "kube-proxy-wpw2t" in "kube-system" namespace to be "Ready" ...
	I0610 16:24:02.965109    8032 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-048679" in "kube-system" namespace to be "Ready" ...
	I0610 16:24:03.012965    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:03.051097    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:03.332700    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:03.343422    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:24:03.363879    8032 pod_ready.go:92] pod "kube-scheduler-addons-048679" in "kube-system" namespace has status "Ready":"True"
	I0610 16:24:03.363905    8032 pod_ready.go:81] duration metric: took 398.787187ms waiting for pod "kube-scheduler-addons-048679" in "kube-system" namespace to be "Ready" ...
	I0610 16:24:03.363914    8032 pod_ready.go:38] duration metric: took 43.843789862s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 16:24:03.363950    8032 api_server.go:52] waiting for apiserver process to appear ...
	I0610 16:24:03.364025    8032 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 16:24:03.381270    8032 api_server.go:72] duration metric: took 44.175587777s to wait for apiserver process to appear ...
	I0610 16:24:03.381293    8032 api_server.go:88] waiting for apiserver healthz status ...
	I0610 16:24:03.381310    8032 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0610 16:24:03.390978    8032 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0610 16:24:03.392211    8032 api_server.go:141] control plane version: v1.27.2
	I0610 16:24:03.392236    8032 api_server.go:131] duration metric: took 10.935316ms to wait for apiserver health ...
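For reference, the healthz probe logged just above can be reproduced outside the test with a short standalone Go program against the same endpoint. This is a minimal sketch, not part of the test harness: the address 192.168.49.2:8443 comes from the log, and skipping TLS verification is an assumption made only to keep the example short (the real client trusts the minikube CA).

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // Probe the same endpoint the log checks: https://192.168.49.2:8443/healthz
        client := &http.Client{Transport: &http.Transport{
            // Assumption: skip certificate verification instead of loading the minikube CA.
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            fmt.Println("healthz probe failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // a healthy apiserver answers 200 "ok"
    }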
	I0610 16:24:03.392245    8032 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 16:24:03.511648    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:03.551894    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:03.571762    8032 system_pods.go:59] 18 kube-system pods found
	I0610 16:24:03.571799    8032 system_pods.go:61] "coredns-5d78c9869d-9drks" [86dc19ce-f9c3-4571-8af9-5d6a9ca2d602] Running
	I0610 16:24:03.571805    8032 system_pods.go:61] "coredns-5d78c9869d-qmk44" [81849ba1-91d9-48ba-9045-1646978ec088] Running
	I0610 16:24:03.571814    8032 system_pods.go:61] "csi-hostpath-attacher-0" [043c7001-c22e-47ed-8180-4c2a033a6526] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0610 16:24:03.571823    8032 system_pods.go:61] "csi-hostpath-resizer-0" [da1c2621-994c-42dd-93a3-af8d97ee21c0] Running
	I0610 16:24:03.571832    8032 system_pods.go:61] "csi-hostpathplugin-kxn5d" [d1bde148-8bbc-48aa-b2ee-03a88494297a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0610 16:24:03.571841    8032 system_pods.go:61] "etcd-addons-048679" [70b797d3-7821-4616-a4e7-997e7ed87bfe] Running
	I0610 16:24:03.571847    8032 system_pods.go:61] "kindnet-n8d86" [0ea8d8ef-fe8c-4c6f-b9bc-a7de4eb3e723] Running
	I0610 16:24:03.571857    8032 system_pods.go:61] "kube-apiserver-addons-048679" [18e6ff16-77ef-45ef-92f7-6841e0a70e21] Running
	I0610 16:24:03.571863    8032 system_pods.go:61] "kube-controller-manager-addons-048679" [ccdf290f-e7aa-4a6c-9ebf-4ca64a9f5577] Running
	I0610 16:24:03.571875    8032 system_pods.go:61] "kube-ingress-dns-minikube" [e3ccac6f-f191-4c74-9e6b-08bde7ab3fc0] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0610 16:24:03.571881    8032 system_pods.go:61] "kube-proxy-wpw2t" [9c1493b6-c816-42c0-8ece-dc52e4829672] Running
	I0610 16:24:03.571888    8032 system_pods.go:61] "kube-scheduler-addons-048679" [75ccec57-ce0d-4737-9b80-ca67a7536be7] Running
	I0610 16:24:03.571896    8032 system_pods.go:61] "metrics-server-844d8db974-rm647" [070b6f46-3316-4379-84dd-104bb4ee8773] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 16:24:03.571906    8032 system_pods.go:61] "registry-6j8b6" [38d3067f-658a-4d81-b7ec-82ba4bacfc27] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0610 16:24:03.571912    8032 system_pods.go:61] "registry-proxy-ns9p4" [8fea9c13-b49b-4d01-b80f-f96d7210b726] Running
	I0610 16:24:03.571921    8032 system_pods.go:61] "snapshot-controller-75bbb956b9-7z67l" [f5f6a8e6-f9b1-4fbb-b1e8-7392d3a4aa80] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0610 16:24:03.571932    8032 system_pods.go:61] "snapshot-controller-75bbb956b9-mlhhc" [6cf04cac-57fc-4374-92d2-d7fe940bcff1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0610 16:24:03.571941    8032 system_pods.go:61] "storage-provisioner" [5ec40f7f-ee02-4a78-b771-924c023896c3] Running
	I0610 16:24:03.571946    8032 system_pods.go:74] duration metric: took 179.696318ms to wait for pod list to return data ...
	I0610 16:24:03.571960    8032 default_sa.go:34] waiting for default service account to be created ...
	I0610 16:24:03.763649    8032 default_sa.go:45] found service account: "default"
	I0610 16:24:03.763720    8032 default_sa.go:55] duration metric: took 191.753078ms for default service account to be created ...
	I0610 16:24:03.763744    8032 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 16:24:03.833410    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:03.845224    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:24:03.972821    8032 system_pods.go:86] 18 kube-system pods found
	I0610 16:24:03.972851    8032 system_pods.go:89] "coredns-5d78c9869d-9drks" [86dc19ce-f9c3-4571-8af9-5d6a9ca2d602] Running
	I0610 16:24:03.972858    8032 system_pods.go:89] "coredns-5d78c9869d-qmk44" [81849ba1-91d9-48ba-9045-1646978ec088] Running
	I0610 16:24:03.972866    8032 system_pods.go:89] "csi-hostpath-attacher-0" [043c7001-c22e-47ed-8180-4c2a033a6526] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0610 16:24:03.972874    8032 system_pods.go:89] "csi-hostpath-resizer-0" [da1c2621-994c-42dd-93a3-af8d97ee21c0] Running
	I0610 16:24:03.972882    8032 system_pods.go:89] "csi-hostpathplugin-kxn5d" [d1bde148-8bbc-48aa-b2ee-03a88494297a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0610 16:24:03.972894    8032 system_pods.go:89] "etcd-addons-048679" [70b797d3-7821-4616-a4e7-997e7ed87bfe] Running
	I0610 16:24:03.972903    8032 system_pods.go:89] "kindnet-n8d86" [0ea8d8ef-fe8c-4c6f-b9bc-a7de4eb3e723] Running
	I0610 16:24:03.972908    8032 system_pods.go:89] "kube-apiserver-addons-048679" [18e6ff16-77ef-45ef-92f7-6841e0a70e21] Running
	I0610 16:24:03.972916    8032 system_pods.go:89] "kube-controller-manager-addons-048679" [ccdf290f-e7aa-4a6c-9ebf-4ca64a9f5577] Running
	I0610 16:24:03.972925    8032 system_pods.go:89] "kube-ingress-dns-minikube" [e3ccac6f-f191-4c74-9e6b-08bde7ab3fc0] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0610 16:24:03.972936    8032 system_pods.go:89] "kube-proxy-wpw2t" [9c1493b6-c816-42c0-8ece-dc52e4829672] Running
	I0610 16:24:03.972941    8032 system_pods.go:89] "kube-scheduler-addons-048679" [75ccec57-ce0d-4737-9b80-ca67a7536be7] Running
	I0610 16:24:03.972948    8032 system_pods.go:89] "metrics-server-844d8db974-rm647" [070b6f46-3316-4379-84dd-104bb4ee8773] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 16:24:03.972958    8032 system_pods.go:89] "registry-6j8b6" [38d3067f-658a-4d81-b7ec-82ba4bacfc27] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0610 16:24:03.972966    8032 system_pods.go:89] "registry-proxy-ns9p4" [8fea9c13-b49b-4d01-b80f-f96d7210b726] Running
	I0610 16:24:03.972978    8032 system_pods.go:89] "snapshot-controller-75bbb956b9-7z67l" [f5f6a8e6-f9b1-4fbb-b1e8-7392d3a4aa80] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0610 16:24:03.972991    8032 system_pods.go:89] "snapshot-controller-75bbb956b9-mlhhc" [6cf04cac-57fc-4374-92d2-d7fe940bcff1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0610 16:24:03.972997    8032 system_pods.go:89] "storage-provisioner" [5ec40f7f-ee02-4a78-b771-924c023896c3] Running
	I0610 16:24:03.973007    8032 system_pods.go:126] duration metric: took 209.246571ms to wait for k8s-apps to be running ...
	I0610 16:24:03.973015    8032 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 16:24:03.973070    8032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 16:24:03.992108    8032 system_svc.go:56] duration metric: took 19.085119ms WaitForService to wait for kubelet.
	I0610 16:24:03.992131    8032 kubeadm.go:581] duration metric: took 44.786452904s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0610 16:24:03.992154    8032 node_conditions.go:102] verifying NodePressure condition ...
	I0610 16:24:04.012938    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:04.066935    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:04.164808    8032 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0610 16:24:04.164843    8032 node_conditions.go:123] node cpu capacity is 2
	I0610 16:24:04.164856    8032 node_conditions.go:105] duration metric: took 172.697796ms to run NodePressure ...
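The NodePressure step above only reads back the node's reported capacity. The same figures can be fetched one-off with kubectl; the sketch below is hypothetical and not part of the test, and the node name and JSONPath expression are assumptions used only for illustration.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Read the capacity fields the log reports (ephemeral-storage and cpu).
        out, err := exec.Command("kubectl", "get", "node", "addons-048679", "-o",
            "jsonpath={.status.capacity.ephemeral-storage} {.status.capacity.cpu}").CombinedOutput()
        if err != nil {
            fmt.Println("kubectl failed:", err, string(out))
            return
        }
        fmt.Println(string(out)) // expected to roughly match the log: 203034800Ki 2
    }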
	I0610 16:24:04.164868    8032 start.go:228] waiting for startup goroutines ...
	I0610 16:24:04.333002    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:04.344168    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:24:04.513188    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:04.552495    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:04.833250    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:04.844644    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:24:05.012775    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:05.051839    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:05.335769    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:05.345638    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:24:05.512589    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:05.552216    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:05.833422    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:05.844522    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:24:06.012788    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:06.053554    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:06.334099    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:06.347601    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:24:06.512494    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:06.551796    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:06.832210    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:06.844275    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:24:07.012879    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:07.052712    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:07.333271    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:07.344171    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:24:07.512113    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:07.552877    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:07.832573    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:07.843257    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:24:08.012243    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:08.051190    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:08.332918    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:08.343933    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:24:08.512544    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:08.555943    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:08.833154    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:08.847036    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:24:09.012996    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:09.053307    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:09.333653    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:09.357936    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:24:09.529253    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:09.552881    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:09.833162    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:09.844464    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:24:10.016439    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:10.053426    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:10.338694    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:10.347348    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:24:10.513185    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:10.553369    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:10.833681    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:10.843934    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:24:11.012641    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:11.051394    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:11.339210    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:11.348627    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:24:11.512843    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:11.553149    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:11.833892    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:11.844588    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:24:12.012838    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:12.057580    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:12.332870    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:12.343574    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:24:12.512174    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:12.553287    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:12.833460    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:12.844990    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 16:24:13.012372    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:13.052319    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:13.333098    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:13.343893    8032 kapi.go:107] duration metric: took 47.515300509s to wait for kubernetes.io/minikube-addons=registry ...
	I0610 16:24:13.512791    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:13.551328    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:13.833380    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:14.013545    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:14.052305    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:14.333283    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:14.512299    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:14.552237    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:14.832673    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:15.012857    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:15.051709    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:15.334188    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:15.512218    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:15.551150    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:15.832124    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:16.012684    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:16.051920    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:16.332123    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:16.512182    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:16.551148    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:16.832609    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:17.012872    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:17.052303    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:17.333211    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:17.512421    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:17.551259    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:17.832876    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:18.012991    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:18.052128    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:18.333089    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:18.513046    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:18.551692    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:18.832540    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:19.014255    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:19.051767    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 16:24:19.333904    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:19.513291    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:19.552137    8032 kapi.go:107] duration metric: took 52.017289157s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0610 16:24:19.832708    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:20.012607    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:20.334011    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:20.511932    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:20.832265    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:21.012097    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:21.333147    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:21.512831    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:21.833192    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:22.012322    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:22.333250    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:22.512081    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:22.833008    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:23.012876    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:23.333715    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:23.512155    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:23.832977    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:24.012271    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:24.332543    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:24.513031    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:24.832467    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:25.012392    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:25.333226    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:25.512109    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:25.833001    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:26.011752    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:26.333488    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:26.512757    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:26.832633    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:27.012672    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:27.333433    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:27.512380    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:27.832429    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:28.012250    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:28.332547    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:28.512449    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:28.832934    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:29.011893    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:29.332898    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:29.513068    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:29.832764    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:30.012943    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:30.333128    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:30.511861    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:30.833477    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:31.012338    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:31.332699    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:31.512604    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:31.832252    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:32.012596    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:32.332158    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:32.511909    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:32.833813    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:33.013230    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:33.334219    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:33.511920    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:33.833249    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:34.012479    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:34.332399    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:34.512917    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:34.833074    8032 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 16:24:35.015352    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:35.333389    8032 kapi.go:107] duration metric: took 1m9.512447236s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0610 16:24:35.512444    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:36.014396    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:36.512290    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:37.012482    8032 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 16:24:37.512460    8032 kapi.go:107] duration metric: took 1m8.520637764s to wait for kubernetes.io/minikube-addons=gcp-auth ...
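The kapi.go:96 lines that dominate the startup log are label-selector polls repeated roughly every 500ms until all matching pods leave Pending. Below is a minimal client-go sketch of that style of wait, purely as an illustration rather than minikube's actual implementation; the kubeconfig path, the gcp-auth namespace, the two-minute deadline and the poll interval are assumptions.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: the default kubeconfig (~/.kube/config) points at the addons-048679 cluster.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        selector := "kubernetes.io/minikube-addons=gcp-auth" // label taken from the log above
        namespace := "gcp-auth"                              // assumption: the addon's namespace
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods(namespace).List(context.TODO(),
                metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                panic(err)
            }
            running := len(pods.Items) > 0
            for _, p := range pods.Items {
                if p.Status.Phase != corev1.PodRunning {
                    running = false // still Pending (or otherwise not Running): keep polling
                }
            }
            if running {
                fmt.Println("all pods matching", selector, "are Running")
                return
            }
            time.Sleep(500 * time.Millisecond) // the log timestamps show ~500ms between polls
        }
        fmt.Println("timed out waiting for", selector)
    }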
	I0610 16:24:37.514361    8032 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-048679 cluster.
	I0610 16:24:37.516374    8032 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0610 16:24:37.518076    8032 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0610 16:24:37.519914    8032 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, default-storageclass, cloud-spanner, inspektor-gadget, metrics-server, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0610 16:24:37.521753    8032 addons.go:499] enable addons completed in 1m18.759433053s: enabled=[storage-provisioner ingress-dns default-storageclass cloud-spanner inspektor-gadget metrics-server volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0610 16:24:37.521791    8032 start.go:233] waiting for cluster config update ...
	I0610 16:24:37.521808    8032 start.go:242] writing updated cluster config ...
	I0610 16:24:37.522092    8032 ssh_runner.go:195] Run: rm -f paused
	I0610 16:24:37.585240    8032 start.go:573] kubectl: 1.27.2, cluster: 1.27.2 (minor skew: 0)
	I0610 16:24:37.587202    8032 out.go:177] * Done! kubectl is now configured to use "addons-048679" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	78190f5dd4107       13753a81eccfd       5 seconds ago        Exited              hello-world-app           2                   917800d48b611       hello-world-app-65bdb79f98-jlp8k
	a8fccc5b3b63a       5ee47dcca7543       31 seconds ago       Running             nginx                     0                   3206397647d75       nginx
	2de0f6fef6a71       d23bd5d730ccb       58 seconds ago       Running             headlamp                  0                   556d1b677b219       headlamp-6b5756787-qxv8w
	8bbff99a61a92       2a5f29343eb03       About a minute ago   Running             gcp-auth                  0                   8b73fe7765cc7       gcp-auth-58478865f7-ck2j5
	816e59a537cfe       8f2588812ab29       About a minute ago   Exited              patch                     2                   edbad46d34c2c       ingress-nginx-admission-patch-8qhhc
	9d5d668d6a412       8f2588812ab29       About a minute ago   Exited              create                    0                   0e2d33ff1045e       ingress-nginx-admission-create-jqp8d
	47c4df7fd8242       97e04611ad434       About a minute ago   Running             coredns                   0                   29cb20ea03efa       coredns-5d78c9869d-9drks
	424fff9ccd0a6       97e04611ad434       About a minute ago   Running             coredns                   0                   a127098c1b4e0       coredns-5d78c9869d-qmk44
	3f4a00d4ee721       ba04bb24b9575       2 minutes ago        Running             storage-provisioner       0                   a81a6cf53aacf       storage-provisioner
	fa13dcff6f361       29921a0845422       2 minutes ago        Running             kube-proxy                0                   d426c42d0a35e       kube-proxy-wpw2t
	7b2308effc996       b18bf71b941ba       2 minutes ago        Running             kindnet-cni               0                   647275e8fa4c9       kindnet-n8d86
	b962ecc414dcd       305d7ed1dae28       2 minutes ago        Running             kube-scheduler            0                   0061e8e6d6c45       kube-scheduler-addons-048679
	17ac4896fbfbc       2ee705380c3c5       2 minutes ago        Running             kube-controller-manager   0                   70c91552a0221       kube-controller-manager-addons-048679
	26e92134f64ba       72c9df6be7f1b       2 minutes ago        Running             kube-apiserver            0                   c457c39b6abdd       kube-apiserver-addons-048679
	a14a8b716671c       24bc64e911039       2 minutes ago        Running             etcd                      0                   d542988eb6cb2       etcd-addons-048679
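The container status table above is gathered from the container runtime on the node; an equivalent view can be pulled directly with crictl over minikube ssh. A hypothetical helper, not part of the test (the profile name is taken from the log above):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // List all containers, including exited ones, inside the minikube node,
        // mirroring the table above.
        out, err := exec.Command("minikube", "-p", "addons-048679",
            "ssh", "--", "sudo", "crictl", "ps", "-a").CombinedOutput()
        if err != nil {
            fmt.Println("minikube ssh failed:", err)
        }
        fmt.Print(string(out))
    }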
	
	* 
	* ==> containerd <==
	* Jun 10 16:25:42 addons-048679 containerd[743]: time="2023-06-10T16:25:42.344534352Z" level=info msg="Container to stop \"7c062084d6c817c4cddd173506ef0fb7ba63118326a301f6a045cc9b45160577\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jun 10 16:25:42 addons-048679 containerd[743]: time="2023-06-10T16:25:42.355753111Z" level=warning msg="cleanup warnings time=\"2023-06-10T16:25:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=10104 runtime=io.containerd.runc.v2\n"
	Jun 10 16:25:42 addons-048679 containerd[743]: time="2023-06-10T16:25:42.360737582Z" level=info msg="StopContainer for \"0e3540705208f1a56b06f5fd5ffa8fcfbe717a4eaeecc8ec6a1d35f6b14c39d2\" returns successfully"
	Jun 10 16:25:42 addons-048679 containerd[743]: time="2023-06-10T16:25:42.361418109Z" level=info msg="StopPodSandbox for \"4e395778b79882ffc28face46c214ce7fc0a3f5cd7183c317478570003b3e3c0\""
	Jun 10 16:25:42 addons-048679 containerd[743]: time="2023-06-10T16:25:42.361606390Z" level=info msg="Container to stop \"0e3540705208f1a56b06f5fd5ffa8fcfbe717a4eaeecc8ec6a1d35f6b14c39d2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jun 10 16:25:42 addons-048679 containerd[743]: time="2023-06-10T16:25:42.394928549Z" level=info msg="shim disconnected" id=8215313557dd9b066c3ecafc91261e0842d8219467aa186bb1c225d25172b21f
	Jun 10 16:25:42 addons-048679 containerd[743]: time="2023-06-10T16:25:42.394985788Z" level=warning msg="cleaning up after shim disconnected" id=8215313557dd9b066c3ecafc91261e0842d8219467aa186bb1c225d25172b21f namespace=k8s.io
	Jun 10 16:25:42 addons-048679 containerd[743]: time="2023-06-10T16:25:42.394996709Z" level=info msg="cleaning up dead shim"
	Jun 10 16:25:42 addons-048679 containerd[743]: time="2023-06-10T16:25:42.414695610Z" level=warning msg="cleanup warnings time=\"2023-06-10T16:25:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=10152 runtime=io.containerd.runc.v2\n"
	Jun 10 16:25:42 addons-048679 containerd[743]: time="2023-06-10T16:25:42.435685265Z" level=info msg="TearDown network for sandbox \"8215313557dd9b066c3ecafc91261e0842d8219467aa186bb1c225d25172b21f\" successfully"
	Jun 10 16:25:42 addons-048679 containerd[743]: time="2023-06-10T16:25:42.435931293Z" level=info msg="StopPodSandbox for \"8215313557dd9b066c3ecafc91261e0842d8219467aa186bb1c225d25172b21f\" returns successfully"
	Jun 10 16:25:42 addons-048679 containerd[743]: time="2023-06-10T16:25:42.437384195Z" level=info msg="shim disconnected" id=4e395778b79882ffc28face46c214ce7fc0a3f5cd7183c317478570003b3e3c0
	Jun 10 16:25:42 addons-048679 containerd[743]: time="2023-06-10T16:25:42.438044275Z" level=warning msg="cleaning up after shim disconnected" id=4e395778b79882ffc28face46c214ce7fc0a3f5cd7183c317478570003b3e3c0 namespace=k8s.io
	Jun 10 16:25:42 addons-048679 containerd[743]: time="2023-06-10T16:25:42.438200483Z" level=info msg="cleaning up dead shim"
	Jun 10 16:25:42 addons-048679 containerd[743]: time="2023-06-10T16:25:42.458537573Z" level=warning msg="cleanup warnings time=\"2023-06-10T16:25:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=10192 runtime=io.containerd.runc.v2\n"
	Jun 10 16:25:42 addons-048679 containerd[743]: time="2023-06-10T16:25:42.489646042Z" level=info msg="TearDown network for sandbox \"4e395778b79882ffc28face46c214ce7fc0a3f5cd7183c317478570003b3e3c0\" successfully"
	Jun 10 16:25:42 addons-048679 containerd[743]: time="2023-06-10T16:25:42.489693410Z" level=info msg="StopPodSandbox for \"4e395778b79882ffc28face46c214ce7fc0a3f5cd7183c317478570003b3e3c0\" returns successfully"
	Jun 10 16:25:42 addons-048679 containerd[743]: time="2023-06-10T16:25:42.656955417Z" level=info msg="RemoveContainer for \"f04e382cd00637501bf81ef2b362e89c9d6353f64092c95576a2a3ffb664d3ed\""
	Jun 10 16:25:42 addons-048679 containerd[743]: time="2023-06-10T16:25:42.663413835Z" level=info msg="RemoveContainer for \"f04e382cd00637501bf81ef2b362e89c9d6353f64092c95576a2a3ffb664d3ed\" returns successfully"
	Jun 10 16:25:42 addons-048679 containerd[743]: time="2023-06-10T16:25:42.676243394Z" level=info msg="RemoveContainer for \"0e3540705208f1a56b06f5fd5ffa8fcfbe717a4eaeecc8ec6a1d35f6b14c39d2\""
	Jun 10 16:25:42 addons-048679 containerd[743]: time="2023-06-10T16:25:42.682349700Z" level=info msg="RemoveContainer for \"0e3540705208f1a56b06f5fd5ffa8fcfbe717a4eaeecc8ec6a1d35f6b14c39d2\" returns successfully"
	Jun 10 16:25:42 addons-048679 containerd[743]: time="2023-06-10T16:25:42.683089533Z" level=error msg="ContainerStatus for \"0e3540705208f1a56b06f5fd5ffa8fcfbe717a4eaeecc8ec6a1d35f6b14c39d2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0e3540705208f1a56b06f5fd5ffa8fcfbe717a4eaeecc8ec6a1d35f6b14c39d2\": not found"
	Jun 10 16:25:42 addons-048679 containerd[743]: time="2023-06-10T16:25:42.686547708Z" level=info msg="RemoveContainer for \"7c062084d6c817c4cddd173506ef0fb7ba63118326a301f6a045cc9b45160577\""
	Jun 10 16:25:42 addons-048679 containerd[743]: time="2023-06-10T16:25:42.696756097Z" level=info msg="RemoveContainer for \"7c062084d6c817c4cddd173506ef0fb7ba63118326a301f6a045cc9b45160577\" returns successfully"
	Jun 10 16:25:42 addons-048679 containerd[743]: time="2023-06-10T16:25:42.698336086Z" level=error msg="ContainerStatus for \"7c062084d6c817c4cddd173506ef0fb7ba63118326a301f6a045cc9b45160577\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7c062084d6c817c4cddd173506ef0fb7ba63118326a301f6a045cc9b45160577\": not found"
	
	* 
	* ==> coredns [424fff9ccd0a68ed1c0ff48e2a9bd75543d41c5db62430b89c6b3c916adfb9ed] <==
	* [INFO] 10.244.0.17:60657 - 35810 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000224351s
	[INFO] 10.244.0.17:60657 - 26370 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00015236s
	[INFO] 10.244.0.17:60657 - 41772 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000096557s
	[INFO] 10.244.0.17:60657 - 25222 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000091601s
	[INFO] 10.244.0.17:60657 - 56718 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002895161s
	[INFO] 10.244.0.17:60657 - 58574 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.004208462s
	[INFO] 10.244.0.17:60657 - 2363 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000116742s
	[INFO] 10.244.0.17:36791 - 63381 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000111326s
	[INFO] 10.244.0.17:36791 - 3997 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000067503s
	[INFO] 10.244.0.17:36791 - 54162 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000073451s
	[INFO] 10.244.0.17:36791 - 13628 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000064057s
	[INFO] 10.244.0.17:36791 - 37859 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000082608s
	[INFO] 10.244.0.17:36791 - 36123 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000063655s
	[INFO] 10.244.0.17:36791 - 7031 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001229708s
	[INFO] 10.244.0.17:36791 - 57821 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00093472s
	[INFO] 10.244.0.17:36791 - 5214 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000044356s
	[INFO] 10.244.0.17:44182 - 31811 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000083387s
	[INFO] 10.244.0.17:44182 - 9675 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000094383s
	[INFO] 10.244.0.17:44182 - 27629 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000082272s
	[INFO] 10.244.0.17:44182 - 59950 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00008923s
	[INFO] 10.244.0.17:44182 - 58441 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000142514s
	[INFO] 10.244.0.17:44182 - 54063 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000094431s
	[INFO] 10.244.0.17:44182 - 25774 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001189462s
	[INFO] 10.244.0.17:44182 - 50047 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00090555s
	[INFO] 10.244.0.17:44182 - 10719 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000059232s
	
	* 
	* ==> coredns [47c4df7fd8242a7432fbac5dbcdd03204a0592489b1afd3272017ae1e5a0768f] <==
	* [INFO] 10.244.0.2:33812 - 14925 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000075716s
	[INFO] 10.244.0.2:33812 - 33615 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000063983s
	[INFO] 10.244.0.2:47033 - 44791 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000113115s
	[INFO] 10.244.0.2:47033 - 1995 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000158275s
	[INFO] 10.244.0.2:38751 - 34442 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00006761s
	[INFO] 10.244.0.2:38751 - 50056 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000051151s
	[INFO] 10.244.0.2:55124 - 23955 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003889445s
	[INFO] 10.244.0.2:55124 - 61589 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003913864s
	[INFO] 10.244.0.2:41780 - 14344 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000113213s
	[INFO] 10.244.0.2:41780 - 50231 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000137311s
	[INFO] 10.244.0.18:50656 - 47128 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000149578s
	[INFO] 10.244.0.18:34916 - 53269 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000113484s
	[INFO] 10.244.0.18:53252 - 44310 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000150915s
	[INFO] 10.244.0.18:58137 - 43191 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002331057s
	[INFO] 10.244.0.18:48919 - 9136 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003743188s
	[INFO] 10.244.0.20:47689 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000139306s
	[INFO] 10.244.0.17:55280 - 56902 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000193073s
	[INFO] 10.244.0.17:55280 - 11453 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000117709s
	[INFO] 10.244.0.17:55280 - 62760 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000093455s
	[INFO] 10.244.0.17:55280 - 7038 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00008534s
	[INFO] 10.244.0.17:55280 - 29775 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000074929s
	[INFO] 10.244.0.17:55280 - 6248 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000097895s
	[INFO] 10.244.0.17:55280 - 49350 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001236395s
	[INFO] 10.244.0.17:55280 - 30699 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000893005s
	[INFO] 10.244.0.17:55280 - 46997 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000160549s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-048679
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-048679
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eafc8e84d7336f18f4fb303d71d15fbd84fd16d5
	                    minikube.k8s.io/name=addons-048679
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_10T16_23_06_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-048679
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jun 2023 16:23:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-048679
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jun 2023 16:25:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jun 2023 16:25:39 +0000   Sat, 10 Jun 2023 16:22:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jun 2023 16:25:39 +0000   Sat, 10 Jun 2023 16:22:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jun 2023 16:25:39 +0000   Sat, 10 Jun 2023 16:22:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jun 2023 16:25:39 +0000   Sat, 10 Jun 2023 16:23:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-048679
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	System Info:
	  Machine ID:                 49c46e69a31743c9ba81fbc352f8a9e9
	  System UUID:                d08ec044-f3ce-4072-8356-c71e8d0f2d6e
	  Boot ID:                    9a54dfd9-cc23-412f-8f4a-0089a0162bc0
	  Kernel Version:             5.15.0-1037-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.21
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-65bdb79f98-jlp8k         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  gcp-auth                    gcp-auth-58478865f7-ck2j5                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  headlamp                    headlamp-6b5756787-qxv8w                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 coredns-5d78c9869d-9drks                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m28s
	  kube-system                 coredns-5d78c9869d-qmk44                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m28s
	  kube-system                 etcd-addons-048679                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m44s
	  kube-system                 kindnet-n8d86                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m28s
	  kube-system                 kube-apiserver-addons-048679             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 kube-controller-manager-addons-048679    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 kube-proxy-wpw2t                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-scheduler-addons-048679             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m26s                  kube-proxy       
	  Normal  Starting                 2m51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m51s (x8 over 2m51s)  kubelet          Node addons-048679 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m51s (x8 over 2m51s)  kubelet          Node addons-048679 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m51s (x7 over 2m51s)  kubelet          Node addons-048679 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m42s                  kubelet          Node addons-048679 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m42s                  kubelet          Node addons-048679 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m42s                  kubelet          Node addons-048679 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m42s                  kubelet          Node addons-048679 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m31s                  kubelet          Node addons-048679 status is now: NodeReady
	  Normal  RegisteredNode           2m29s                  node-controller  Node addons-048679 event: Registered Node addons-048679 in Controller
	
	* 
	* ==> dmesg <==
	* [Jun10 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015276] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +1.238178] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.921434] kauditd_printk_skb: 26 callbacks suppressed
	
	* 
	* ==> etcd [a14a8b716671c0cf7f155bb3b60affc10372ee50f689a12f0af6283835d55a0d] <==
	* {"level":"info","ts":"2023-06-10T16:22:57.508Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-06-10T16:22:57.508Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-06-10T16:22:57.509Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-06-10T16:22:57.509Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-06-10T16:22:57.509Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-06-10T16:22:57.509Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-06-10T16:22:57.509Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-06-10T16:22:57.766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-06-10T16:22:57.766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-06-10T16:22:57.766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-06-10T16:22:57.767Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-06-10T16:22:57.767Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-06-10T16:22:57.767Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-06-10T16:22:57.767Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-06-10T16:22:57.770Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-048679 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-10T16:22:57.770Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T16:22:57.772Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-06-10T16:22:57.770Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:22:57.770Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T16:22:57.783Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-10T16:22:57.771Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-10T16:22:57.786Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-10T16:22:57.773Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:22:57.786Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:22:57.787Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> gcp-auth [8bbff99a61a9297be4fb23b3f8115fca99457df3ef373aa5a1b089e3dd484479] <==
	* 2023/06/10 16:24:36 GCP Auth Webhook started!
	2023/06/10 16:24:44 Ready to marshal response ...
	2023/06/10 16:24:44 Ready to write response ...
	2023/06/10 16:24:44 Ready to marshal response ...
	2023/06/10 16:24:44 Ready to write response ...
	2023/06/10 16:24:44 Ready to marshal response ...
	2023/06/10 16:24:44 Ready to write response ...
	2023/06/10 16:24:48 Ready to marshal response ...
	2023/06/10 16:24:48 Ready to write response ...
	2023/06/10 16:24:55 Ready to marshal response ...
	2023/06/10 16:24:55 Ready to write response ...
	2023/06/10 16:25:14 Ready to marshal response ...
	2023/06/10 16:25:14 Ready to write response ...
	2023/06/10 16:25:22 Ready to marshal response ...
	2023/06/10 16:25:22 Ready to write response ...
	2023/06/10 16:25:27 Ready to marshal response ...
	2023/06/10 16:25:27 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  16:25:47 up 8 min,  0 users,  load average: 0.95, 0.90, 0.41
	Linux addons-048679 5.15.0-1037-aws #41~20.04.1-Ubuntu SMP Mon May 22 18:20:20 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [7b2308effc996f7918eb1bacac106965b568d8542400813f746ab9985efcc393] <==
	* I0610 16:23:51.335573       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0610 16:23:51.351015       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 16:23:51.351045       1 main.go:227] handling current node
	I0610 16:24:01.366219       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 16:24:01.366322       1 main.go:227] handling current node
	I0610 16:24:11.378693       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 16:24:11.378783       1 main.go:227] handling current node
	I0610 16:24:21.391224       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 16:24:21.391254       1 main.go:227] handling current node
	I0610 16:24:31.402861       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 16:24:31.402889       1 main.go:227] handling current node
	I0610 16:24:41.407609       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 16:24:41.407637       1 main.go:227] handling current node
	I0610 16:24:51.418945       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 16:24:51.418974       1 main.go:227] handling current node
	I0610 16:25:01.431540       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 16:25:01.431569       1 main.go:227] handling current node
	I0610 16:25:11.444172       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 16:25:11.444201       1 main.go:227] handling current node
	I0610 16:25:21.449234       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 16:25:21.449262       1 main.go:227] handling current node
	I0610 16:25:31.461251       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 16:25:31.461279       1 main.go:227] handling current node
	I0610 16:25:41.469343       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 16:25:41.469371       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [26e92134f64baac2edfa186736b348155776b7f792517ef510e701e36f2a3ec9] <==
	* I0610 16:25:12.302253       1 controller.go:132] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0610 16:25:14.062159       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0610 16:25:14.567979       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs=map[IPv4:10.102.231.89]
	I0610 16:25:22.399511       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs=map[IPv4:10.108.203.152]
	E0610 16:25:35.357472       1 authentication.go:70] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
	E0610 16:25:39.182829       1 authentication.go:70] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	I0610 16:25:41.930708       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0610 16:25:41.930758       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0610 16:25:41.946799       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0610 16:25:41.946866       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0610 16:25:41.979766       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0610 16:25:41.979819       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0610 16:25:41.982127       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0610 16:25:41.982156       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0610 16:25:42.015769       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0610 16:25:42.015941       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0610 16:25:42.031454       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0610 16:25:42.031534       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0610 16:25:42.079285       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0610 16:25:42.079510       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0610 16:25:42.135341       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0610 16:25:42.135416       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0610 16:25:42.983443       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0610 16:25:43.136977       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0610 16:25:43.149107       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	* 
	* ==> kube-controller-manager [17ac4896fbfbc9414f748c108db0921d0a195edaf32710c87aacca00c350012e] <==
	* I0610 16:25:22.155175       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-65bdb79f98 to 1"
	I0610 16:25:22.187717       1 event.go:307] "Event occurred" object="default/hello-world-app-65bdb79f98" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-65bdb79f98-jlp8k"
	W0610 16:25:25.598797       1 endpointslice_controller.go:297] Error syncing endpoint slices for service "default/hello-world-app", retrying. Error: EndpointSlice informer cache is out of date
	I0610 16:25:26.820783       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"hostpath.csi.k8s.io\" or manually created by system administrator"
	W0610 16:25:30.574578       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 16:25:30.574622       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0610 16:25:35.545312       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-attacher"
	I0610 16:25:35.651902       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-resizer"
	I0610 16:25:39.123772       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-create
	I0610 16:25:39.137338       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	E0610 16:25:42.985919       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 16:25:43.139059       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 16:25:43.151060       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 16:25:44.051908       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 16:25:44.051996       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 16:25:44.408432       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 16:25:44.408467       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 16:25:44.605503       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 16:25:44.605537       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 16:25:46.627843       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 16:25:46.627885       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 16:25:46.719507       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 16:25:46.719542       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 16:25:47.650116       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 16:25:47.650154       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [fa13dcff6f361176828df556291c94ca6a3e8166003d04af55f2f8bba3c6c394] <==
	* I0610 16:23:21.274045       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0610 16:23:21.274174       1 server_others.go:110] "Detected node IP" address="192.168.49.2"
	I0610 16:23:21.274201       1 server_others.go:551] "Using iptables proxy"
	I0610 16:23:21.512970       1 server_others.go:190] "Using iptables Proxier"
	I0610 16:23:21.513014       1 server_others.go:197] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0610 16:23:21.513023       1 server_others.go:198] "Creating dualStackProxier for iptables"
	I0610 16:23:21.513036       1 server_others.go:481] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0610 16:23:21.513097       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 16:23:21.513688       1 server.go:657] "Version info" version="v1.27.2"
	I0610 16:23:21.513700       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 16:23:21.516101       1 config.go:188] "Starting service config controller"
	I0610 16:23:21.516117       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0610 16:23:21.516169       1 config.go:97] "Starting endpoint slice config controller"
	I0610 16:23:21.516174       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0610 16:23:21.519229       1 config.go:315] "Starting node config controller"
	I0610 16:23:21.519240       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0610 16:23:21.617138       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0610 16:23:21.617201       1 shared_informer.go:318] Caches are synced for service config
	I0610 16:23:21.619583       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [b962ecc414dcd04f196f67b29a7134d9e0081535b2b64681ddc518412e4c783d] <==
	* W0610 16:23:02.422591       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0610 16:23:02.422608       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0610 16:23:02.422663       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0610 16:23:02.422682       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0610 16:23:02.422722       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 16:23:02.422742       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0610 16:23:02.422873       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0610 16:23:02.422978       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0610 16:23:03.224053       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 16:23:03.224090       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0610 16:23:03.233742       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 16:23:03.234078       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0610 16:23:03.235395       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0610 16:23:03.235569       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0610 16:23:03.323566       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0610 16:23:03.323600       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0610 16:23:03.382177       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 16:23:03.382212       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0610 16:23:03.384035       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0610 16:23:03.384081       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0610 16:23:03.410915       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0610 16:23:03.411126       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0610 16:23:03.472169       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0610 16:23:03.472391       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0610 16:23:03.912460       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Jun 10 16:25:40 addons-048679 kubelet[1354]: I0610 16:25:40.645841    1354 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bhj2z\" (UniqueName: \"kubernetes.io/projected/8712b487-15a6-4406-8d89-ac0ba709b1b2-kube-api-access-bhj2z\") on node \"addons-048679\" DevicePath \"\""
	Jun 10 16:25:40 addons-048679 kubelet[1354]: I0610 16:25:40.650648    1354 scope.go:115] "RemoveContainer" containerID="9a0268b0707f550743da7ed3a6204080a6083afe6ab930fa8d1a5423ef6983df"
	Jun 10 16:25:40 addons-048679 kubelet[1354]: E0610 16:25:40.651171    1354 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9a0268b0707f550743da7ed3a6204080a6083afe6ab930fa8d1a5423ef6983df\": not found" containerID="9a0268b0707f550743da7ed3a6204080a6083afe6ab930fa8d1a5423ef6983df"
	Jun 10 16:25:40 addons-048679 kubelet[1354]: I0610 16:25:40.651213    1354 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:9a0268b0707f550743da7ed3a6204080a6083afe6ab930fa8d1a5423ef6983df} err="failed to get container status \"9a0268b0707f550743da7ed3a6204080a6083afe6ab930fa8d1a5423ef6983df\": rpc error: code = NotFound desc = an error occurred when try to find container \"9a0268b0707f550743da7ed3a6204080a6083afe6ab930fa8d1a5423ef6983df\": not found"
	Jun 10 16:25:41 addons-048679 kubelet[1354]: I0610 16:25:41.712484    1354 scope.go:115] "RemoveContainer" containerID="f04e382cd00637501bf81ef2b362e89c9d6353f64092c95576a2a3ffb664d3ed"
	Jun 10 16:25:41 addons-048679 kubelet[1354]: I0610 16:25:41.725454    1354 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=8712b487-15a6-4406-8d89-ac0ba709b1b2 path="/var/lib/kubelet/pods/8712b487-15a6-4406-8d89-ac0ba709b1b2/volumes"
	Jun 10 16:25:42 addons-048679 kubelet[1354]: I0610 16:25:42.570525    1354 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t97m7\" (UniqueName: \"kubernetes.io/projected/f5f6a8e6-f9b1-4fbb-b1e8-7392d3a4aa80-kube-api-access-t97m7\") pod \"f5f6a8e6-f9b1-4fbb-b1e8-7392d3a4aa80\" (UID: \"f5f6a8e6-f9b1-4fbb-b1e8-7392d3a4aa80\") "
	Jun 10 16:25:42 addons-048679 kubelet[1354]: I0610 16:25:42.570585    1354 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62975\" (UniqueName: \"kubernetes.io/projected/6cf04cac-57fc-4374-92d2-d7fe940bcff1-kube-api-access-62975\") pod \"6cf04cac-57fc-4374-92d2-d7fe940bcff1\" (UID: \"6cf04cac-57fc-4374-92d2-d7fe940bcff1\") "
	Jun 10 16:25:42 addons-048679 kubelet[1354]: I0610 16:25:42.573000    1354 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cf04cac-57fc-4374-92d2-d7fe940bcff1-kube-api-access-62975" (OuterVolumeSpecName: "kube-api-access-62975") pod "6cf04cac-57fc-4374-92d2-d7fe940bcff1" (UID: "6cf04cac-57fc-4374-92d2-d7fe940bcff1"). InnerVolumeSpecName "kube-api-access-62975". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 10 16:25:42 addons-048679 kubelet[1354]: I0610 16:25:42.573279    1354 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5f6a8e6-f9b1-4fbb-b1e8-7392d3a4aa80-kube-api-access-t97m7" (OuterVolumeSpecName: "kube-api-access-t97m7") pod "f5f6a8e6-f9b1-4fbb-b1e8-7392d3a4aa80" (UID: "f5f6a8e6-f9b1-4fbb-b1e8-7392d3a4aa80"). InnerVolumeSpecName "kube-api-access-t97m7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 10 16:25:42 addons-048679 kubelet[1354]: I0610 16:25:42.645673    1354 scope.go:115] "RemoveContainer" containerID="f04e382cd00637501bf81ef2b362e89c9d6353f64092c95576a2a3ffb664d3ed"
	Jun 10 16:25:42 addons-048679 kubelet[1354]: I0610 16:25:42.646453    1354 scope.go:115] "RemoveContainer" containerID="78190f5dd4107317780705d8229adf837634b103cb9177a89421984b19c21f68"
	Jun 10 16:25:42 addons-048679 kubelet[1354]: E0610 16:25:42.646947    1354 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-65bdb79f98-jlp8k_default(517f231e-82dc-44fe-ac41-d344fb71e476)\"" pod="default/hello-world-app-65bdb79f98-jlp8k" podUID=517f231e-82dc-44fe-ac41-d344fb71e476
	Jun 10 16:25:42 addons-048679 kubelet[1354]: I0610 16:25:42.666984    1354 scope.go:115] "RemoveContainer" containerID="0e3540705208f1a56b06f5fd5ffa8fcfbe717a4eaeecc8ec6a1d35f6b14c39d2"
	Jun 10 16:25:42 addons-048679 kubelet[1354]: I0610 16:25:42.675201    1354 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-t97m7\" (UniqueName: \"kubernetes.io/projected/f5f6a8e6-f9b1-4fbb-b1e8-7392d3a4aa80-kube-api-access-t97m7\") on node \"addons-048679\" DevicePath \"\""
	Jun 10 16:25:42 addons-048679 kubelet[1354]: I0610 16:25:42.675372    1354 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-62975\" (UniqueName: \"kubernetes.io/projected/6cf04cac-57fc-4374-92d2-d7fe940bcff1-kube-api-access-62975\") on node \"addons-048679\" DevicePath \"\""
	Jun 10 16:25:42 addons-048679 kubelet[1354]: I0610 16:25:42.682784    1354 scope.go:115] "RemoveContainer" containerID="0e3540705208f1a56b06f5fd5ffa8fcfbe717a4eaeecc8ec6a1d35f6b14c39d2"
	Jun 10 16:25:42 addons-048679 kubelet[1354]: E0610 16:25:42.683525    1354 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0e3540705208f1a56b06f5fd5ffa8fcfbe717a4eaeecc8ec6a1d35f6b14c39d2\": not found" containerID="0e3540705208f1a56b06f5fd5ffa8fcfbe717a4eaeecc8ec6a1d35f6b14c39d2"
	Jun 10 16:25:42 addons-048679 kubelet[1354]: I0610 16:25:42.683900    1354 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:0e3540705208f1a56b06f5fd5ffa8fcfbe717a4eaeecc8ec6a1d35f6b14c39d2} err="failed to get container status \"0e3540705208f1a56b06f5fd5ffa8fcfbe717a4eaeecc8ec6a1d35f6b14c39d2\": rpc error: code = NotFound desc = an error occurred when try to find container \"0e3540705208f1a56b06f5fd5ffa8fcfbe717a4eaeecc8ec6a1d35f6b14c39d2\": not found"
	Jun 10 16:25:42 addons-048679 kubelet[1354]: I0610 16:25:42.683922    1354 scope.go:115] "RemoveContainer" containerID="7c062084d6c817c4cddd173506ef0fb7ba63118326a301f6a045cc9b45160577"
	Jun 10 16:25:42 addons-048679 kubelet[1354]: I0610 16:25:42.697803    1354 scope.go:115] "RemoveContainer" containerID="7c062084d6c817c4cddd173506ef0fb7ba63118326a301f6a045cc9b45160577"
	Jun 10 16:25:42 addons-048679 kubelet[1354]: E0610 16:25:42.698738    1354 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7c062084d6c817c4cddd173506ef0fb7ba63118326a301f6a045cc9b45160577\": not found" containerID="7c062084d6c817c4cddd173506ef0fb7ba63118326a301f6a045cc9b45160577"
	Jun 10 16:25:42 addons-048679 kubelet[1354]: I0610 16:25:42.698788    1354 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:7c062084d6c817c4cddd173506ef0fb7ba63118326a301f6a045cc9b45160577} err="failed to get container status \"7c062084d6c817c4cddd173506ef0fb7ba63118326a301f6a045cc9b45160577\": rpc error: code = NotFound desc = an error occurred when try to find container \"7c062084d6c817c4cddd173506ef0fb7ba63118326a301f6a045cc9b45160577\": not found"
	Jun 10 16:25:43 addons-048679 kubelet[1354]: I0610 16:25:43.713254    1354 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=6cf04cac-57fc-4374-92d2-d7fe940bcff1 path="/var/lib/kubelet/pods/6cf04cac-57fc-4374-92d2-d7fe940bcff1/volumes"
	Jun 10 16:25:43 addons-048679 kubelet[1354]: I0610 16:25:43.713731    1354 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=f5f6a8e6-f9b1-4fbb-b1e8-7392d3a4aa80 path="/var/lib/kubelet/pods/f5f6a8e6-f9b1-4fbb-b1e8-7392d3a4aa80/volumes"
	
	* 
	* ==> storage-provisioner [3f4a00d4ee721cb2adf5266d5934cfd30bcdd2d87047c83a28d3af25789488cf] <==
	* I0610 16:23:24.706632       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0610 16:23:24.732890       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0610 16:23:24.732975       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0610 16:23:24.749932       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0610 16:23:24.750180       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-048679_ea153c7c-17a5-4fd3-a328-bfa97ba2dc1a!
	I0610 16:23:24.751599       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9c35d76d-ff19-4b84-95c4-1542c042eb1c", APIVersion:"v1", ResourceVersion:"602", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-048679_ea153c7c-17a5-4fd3-a328-bfa97ba2dc1a became leader
	I0610 16:23:24.853093       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-048679_ea153c7c-17a5-4fd3-a328-bfa97ba2dc1a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-048679 -n addons-048679
helpers_test.go:261: (dbg) Run:  kubectl --context addons-048679 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (35.97s)
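The failing step is the host-side DNS probe against the minikube node IP, which timed out. A minimal manual check along the same lines (illustrative only; the hostname and IP are taken from the test output above, and the extra timeout flags are an assumption, not part of the test) would be:

	# query the ingress-dns responder on the node IP with a short timeout
	nslookup -timeout=5 hello-john.test 192.168.49.2
	# dig reports timeouts and SERVFAIL more explicitly
	dig +time=5 +tries=1 hello-john.test @192.168.49.2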

                                                
                                    
TestFunctional/serial/LogsFileCmd (2.79s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1245: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 logs --file /tmp/TestFunctionalserialLogsFileCmd2033146483/001/logs.txt
functional_test.go:1245: (dbg) Done: out/minikube-linux-arm64 -p functional-351441 logs --file /tmp/TestFunctionalserialLogsFileCmd2033146483/001/logs.txt: (2.786955882s)
functional_test.go:1250: expected empty minikube logs output, but got: 
***
-- stdout --
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 16:30:03.304021   30781 logs.go:195] command /bin/bash -c "sudo /usr/bin/crictl logs --tail 60 b97a607f3528e8c3ad876f567c3d0e7ee4c500ce17cf8912ab570df2b749170e" failed with error: /bin/bash -c "sudo /usr/bin/crictl logs --tail 60 b97a607f3528e8c3ad876f567c3d0e7ee4c500ce17cf8912ab570df2b749170e": Process exited with status 1
	stdout:
	
	stderr:
	time="2023-06-10T16:30:03Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-apiserver-functional-351441_212b9a0ccaec740c3336a977fa840e2e/kube-apiserver/1.log\": lstat /var/log/pods/kube-system_kube-apiserver-functional-351441_212b9a0ccaec740c3336a977fa840e2e/kube-apiserver/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2023-06-10T16:30:03Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-apiserver-functional-351441_212b9a0ccaec740c3336a977fa840e2e/kube-apiserver/1.log\\\": lstat /var/log/pods/kube-system_kube-apiserver-functional-351441_212b9a0ccaec740c3336a977fa840e2e/kube-apiserver/1.log: no such file or directory\"\n\n** /stderr **"
	E0610 16:30:03.623040   30781 logs.go:195] command /bin/bash -c "sudo /usr/bin/crictl logs --tail 60 9c25aa2a45f0f5f305c25e1b7c3ef15c8a0b67c83df3808c2f073c838d4bc63e" failed with error: /bin/bash -c "sudo /usr/bin/crictl logs --tail 60 9c25aa2a45f0f5f305c25e1b7c3ef15c8a0b67c83df3808c2f073c838d4bc63e": Process exited with status 1
	stdout:
	
	stderr:
	time="2023-06-10T16:30:03Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-scheduler-functional-351441_c53971e1d3b622a1758cedfe42c7c122/kube-scheduler/1.log\": lstat /var/log/pods/kube-system_kube-scheduler-functional-351441_c53971e1d3b622a1758cedfe42c7c122/kube-scheduler/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2023-06-10T16:30:03Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-scheduler-functional-351441_c53971e1d3b622a1758cedfe42c7c122/kube-scheduler/1.log\\\": lstat /var/log/pods/kube-system_kube-scheduler-functional-351441_c53971e1d3b622a1758cedfe42c7c122/kube-scheduler/1.log: no such file or directory\"\n\n** /stderr **"
	! unable to fetch logs for: kube-apiserver [b97a607f3528e8c3ad876f567c3d0e7ee4c500ce17cf8912ab570df2b749170e], kube-scheduler [9c25aa2a45f0f5f305c25e1b7c3ef15c8a0b67c83df3808c2f073c838d4bc63e]

                                                
                                                
** /stderr *****
--- FAIL: TestFunctional/serial/LogsFileCmd (2.79s)
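The stderr above shows the logs command failing because crictl could not stat the per-restart log files under /var/log/pods for the kube-apiserver and kube-scheduler containers. A hypothetical way to inspect the same paths by hand, assuming the functional-351441 profile is still running (the ssh form mirrors the one used elsewhere in this report):

	# list what actually exists under the pod log directory crictl complained about
	out/minikube-linux-arm64 -p functional-351441 ssh "sudo ls -l /var/log/pods/kube-system_kube-apiserver-functional-351441_212b9a0ccaec740c3336a977fa840e2e/kube-apiserver/"
	# look up the currently running apiserver container ID to retry the failing crictl logs call against
	out/minikube-linux-arm64 -p functional-351441 ssh "sudo /usr/bin/crictl ps --name kube-apiserver -q"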

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:353: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 image load --daemon gcr.io/google-containers/addon-resizer:functional-351441 --alsologtostderr
functional_test.go:353: (dbg) Done: out/minikube-linux-arm64 -p functional-351441 image load --daemon gcr.io/google-containers/addon-resizer:functional-351441 --alsologtostderr: (3.683921227s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 image ls
functional_test.go:441: expected "gcr.io/google-containers/addon-resizer:functional-351441" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.96s)
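This failure, and the two daemon-load failures that follow (ImageReloadDaemon and ImageTagAndLoadDaemon), share the same pattern: image load --daemon returns without error, but the tag never appears in image ls. The sequence the tests exercise, reproduced by hand from the commands already shown in this report (the trailing grep is only for convenience):

	docker pull gcr.io/google-containers/addon-resizer:1.8.9
	docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-351441
	out/minikube-linux-arm64 -p functional-351441 image load --daemon gcr.io/google-containers/addon-resizer:functional-351441 --alsologtostderr
	out/minikube-linux-arm64 -p functional-351441 image ls | grep addon-resizer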

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 image load --daemon gcr.io/google-containers/addon-resizer:functional-351441 --alsologtostderr
functional_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p functional-351441 image load --daemon gcr.io/google-containers/addon-resizer:functional-351441 --alsologtostderr: (3.279831688s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 image ls
functional_test.go:441: expected "gcr.io/google-containers/addon-resizer:functional-351441" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:233: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:233: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.906433873s)
functional_test.go:238: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-351441
functional_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 image load --daemon gcr.io/google-containers/addon-resizer:functional-351441 --alsologtostderr
functional_test.go:243: (dbg) Done: out/minikube-linux-arm64 -p functional-351441 image load --daemon gcr.io/google-containers/addon-resizer:functional-351441 --alsologtostderr: (3.486033203s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 image ls
functional_test.go:441: expected "gcr.io/google-containers/addon-resizer:functional-351441" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.73s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:378: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 image save gcr.io/google-containers/addon-resizer:functional-351441 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:384: expected "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:409: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0610 16:30:21.903049   32824 out.go:296] Setting OutFile to fd 1 ...
	I0610 16:30:21.903726   32824 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 16:30:21.903761   32824 out.go:309] Setting ErrFile to fd 2...
	I0610 16:30:21.903782   32824 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 16:30:21.903987   32824 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16578-2220/.minikube/bin
	I0610 16:30:21.904630   32824 config.go:182] Loaded profile config "functional-351441": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
	I0610 16:30:21.904834   32824 config.go:182] Loaded profile config "functional-351441": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
	I0610 16:30:21.905356   32824 cli_runner.go:164] Run: docker container inspect functional-351441 --format={{.State.Status}}
	I0610 16:30:21.940595   32824 ssh_runner.go:195] Run: systemctl --version
	I0610 16:30:21.940705   32824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-351441
	I0610 16:30:21.965343   32824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/functional-351441/id_rsa Username:docker}
	I0610 16:30:22.120879   32824 cache_images.go:286] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	W0610 16:30:22.120959   32824 cache_images.go:254] Failed to load cached images for profile functional-351441. make sure the profile is running. loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar: no such file or directory
	I0610 16:30:22.120980   32824 cache_images.go:262] succeeded pushing to: 
	I0610 16:30:22.120984   32824 cache_images.go:263] failed pushing to: functional-351441

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.32s)
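This failure is downstream of ImageSaveToFile above: the image save step never produced the tar (the stderr shows stat .../addon-resizer-save.tar: no such file or directory), so the load from file has nothing to push. A hypothetical round-trip check of the two commands together, using an illustrative path rather than the workspace path from the test:

	out/minikube-linux-arm64 -p functional-351441 image save gcr.io/google-containers/addon-resizer:functional-351441 /tmp/addon-resizer-save.tar
	ls -l /tmp/addon-resizer-save.tar   # the ImageSaveToFile failure means this file was never written in CI
	out/minikube-linux-arm64 -p functional-351441 image load /tmp/addon-resizer-save.tar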

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (58.79s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-879929 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-879929 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (17.286102711s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-879929 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-879929 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [17f12d2e-ab6c-4f48-952a-28ab9d55314e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [17f12d2e-ab6c-4f48-952a-28ab9d55314e] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.011955255s
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-879929 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-879929 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-879929 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.022628787s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

                                                
                                                

                                                
                                                

                                                
                                                
stderr: 
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-879929 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-879929 addons disable ingress-dns --alsologtostderr -v=1: (6.294462025s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-879929 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-879929 addons disable ingress --alsologtostderr -v=1: (7.361072539s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-879929
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-879929:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c251744fa20a045bbec25c680837f60c6324dc825f2775c86b9918cb2d866502",
	        "Created": "2023-06-10T16:31:19.831332613Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 37149,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-06-10T16:31:20.162855567Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:22d1eb0c15f6653533a03d8e96fd97e1d685d349b3f4c622bea2e52531ef44b9",
	        "ResolvConfPath": "/var/lib/docker/containers/c251744fa20a045bbec25c680837f60c6324dc825f2775c86b9918cb2d866502/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c251744fa20a045bbec25c680837f60c6324dc825f2775c86b9918cb2d866502/hostname",
	        "HostsPath": "/var/lib/docker/containers/c251744fa20a045bbec25c680837f60c6324dc825f2775c86b9918cb2d866502/hosts",
	        "LogPath": "/var/lib/docker/containers/c251744fa20a045bbec25c680837f60c6324dc825f2775c86b9918cb2d866502/c251744fa20a045bbec25c680837f60c6324dc825f2775c86b9918cb2d866502-json.log",
	        "Name": "/ingress-addon-legacy-879929",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-879929:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-879929",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5cfee379972ba1f079e199b830ef76c76af10d15f0ecddbcd0615a3b34a66173-init/diff:/var/lib/docker/overlay2/74cb6f838e1fcfc1b6f19e3b70ff76db9bef2f6117698ff19da434ce3223b74a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5cfee379972ba1f079e199b830ef76c76af10d15f0ecddbcd0615a3b34a66173/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5cfee379972ba1f079e199b830ef76c76af10d15f0ecddbcd0615a3b34a66173/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5cfee379972ba1f079e199b830ef76c76af10d15f0ecddbcd0615a3b34a66173/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-879929",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-879929/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-879929",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-879929",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-879929",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1f7e91ba5ff20bd31532d145b772b01b017caa2d6cea42648bdc814f567e54ca",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1f7e91ba5ff2",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-879929": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c251744fa20a",
	                        "ingress-addon-legacy-879929"
	                    ],
	                    "NetworkID": "2af9fc2da0f6ae514ca4ad6b457dd1ae1e2d0df9dc83dcb9be0bcd049079abce",
	                    "EndpointID": "3d1f5bfd111e9746381b073ddd1bb1d4681745f1c293b939fa2f255a6d9f08c3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-879929 -n ingress-addon-legacy-879929
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-879929 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-879929 logs -n 25: (1.404948496s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-351441                                                   | functional-351441           | jenkins | v1.30.1 | 10 Jun 23 16:30 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2280994841/001:/mount3 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| ssh            | functional-351441 ssh findmnt                                          | functional-351441           | jenkins | v1.30.1 | 10 Jun 23 16:30 UTC |                     |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| mount          | -p functional-351441                                                   | functional-351441           | jenkins | v1.30.1 | 10 Jun 23 16:30 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2280994841/001:/mount1 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| ssh            | functional-351441 ssh findmnt                                          | functional-351441           | jenkins | v1.30.1 | 10 Jun 23 16:30 UTC | 10 Jun 23 16:30 UTC |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| ssh            | functional-351441 ssh findmnt                                          | functional-351441           | jenkins | v1.30.1 | 10 Jun 23 16:30 UTC | 10 Jun 23 16:30 UTC |
	|                | -T /mount2                                                             |                             |         |         |                     |                     |
	| ssh            | functional-351441 ssh findmnt                                          | functional-351441           | jenkins | v1.30.1 | 10 Jun 23 16:30 UTC | 10 Jun 23 16:30 UTC |
	|                | -T /mount3                                                             |                             |         |         |                     |                     |
	| mount          | -p functional-351441                                                   | functional-351441           | jenkins | v1.30.1 | 10 Jun 23 16:30 UTC |                     |
	|                | --kill=true                                                            |                             |         |         |                     |                     |
	| update-context | functional-351441                                                      | functional-351441           | jenkins | v1.30.1 | 10 Jun 23 16:30 UTC | 10 Jun 23 16:30 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-351441                                                      | functional-351441           | jenkins | v1.30.1 | 10 Jun 23 16:30 UTC | 10 Jun 23 16:30 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-351441                                                      | functional-351441           | jenkins | v1.30.1 | 10 Jun 23 16:30 UTC | 10 Jun 23 16:30 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| image          | functional-351441                                                      | functional-351441           | jenkins | v1.30.1 | 10 Jun 23 16:30 UTC | 10 Jun 23 16:30 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-351441                                                      | functional-351441           | jenkins | v1.30.1 | 10 Jun 23 16:30 UTC | 10 Jun 23 16:30 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-351441 ssh pgrep                                            | functional-351441           | jenkins | v1.30.1 | 10 Jun 23 16:30 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-351441 image build -t                                       | functional-351441           | jenkins | v1.30.1 | 10 Jun 23 16:30 UTC | 10 Jun 23 16:30 UTC |
	|                | localhost/my-image:functional-351441                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-351441                                                      | functional-351441           | jenkins | v1.30.1 | 10 Jun 23 16:30 UTC | 10 Jun 23 16:30 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-351441 image ls                                             | functional-351441           | jenkins | v1.30.1 | 10 Jun 23 16:30 UTC | 10 Jun 23 16:30 UTC |
	| image          | functional-351441                                                      | functional-351441           | jenkins | v1.30.1 | 10 Jun 23 16:30 UTC | 10 Jun 23 16:30 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| delete         | -p functional-351441                                                   | functional-351441           | jenkins | v1.30.1 | 10 Jun 23 16:30 UTC | 10 Jun 23 16:31 UTC |
	| start          | -p ingress-addon-legacy-879929                                         | ingress-addon-legacy-879929 | jenkins | v1.30.1 | 10 Jun 23 16:31 UTC | 10 Jun 23 16:32 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=containerd                                         |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-879929                                            | ingress-addon-legacy-879929 | jenkins | v1.30.1 | 10 Jun 23 16:32 UTC | 10 Jun 23 16:32 UTC |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-879929                                            | ingress-addon-legacy-879929 | jenkins | v1.30.1 | 10 Jun 23 16:32 UTC | 10 Jun 23 16:32 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-879929                                            | ingress-addon-legacy-879929 | jenkins | v1.30.1 | 10 Jun 23 16:33 UTC | 10 Jun 23 16:33 UTC |
	|                | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-879929 ip                                         | ingress-addon-legacy-879929 | jenkins | v1.30.1 | 10 Jun 23 16:33 UTC | 10 Jun 23 16:33 UTC |
	| addons         | ingress-addon-legacy-879929                                            | ingress-addon-legacy-879929 | jenkins | v1.30.1 | 10 Jun 23 16:33 UTC | 10 Jun 23 16:33 UTC |
	|                | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-879929                                            | ingress-addon-legacy-879929 | jenkins | v1.30.1 | 10 Jun 23 16:33 UTC | 10 Jun 23 16:33 UTC |
	|                | addons disable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/10 16:31:02
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.20.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 16:31:02.164964   36689 out.go:296] Setting OutFile to fd 1 ...
	I0610 16:31:02.165551   36689 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 16:31:02.165581   36689 out.go:309] Setting ErrFile to fd 2...
	I0610 16:31:02.165603   36689 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 16:31:02.165809   36689 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16578-2220/.minikube/bin
	I0610 16:31:02.166574   36689 out.go:303] Setting JSON to false
	I0610 16:31:02.167628   36689 start.go:127] hostinfo: {"hostname":"ip-172-31-31-251","uptime":807,"bootTime":1686413856,"procs":283,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0610 16:31:02.167738   36689 start.go:137] virtualization:  
	I0610 16:31:02.171780   36689 out.go:177] * [ingress-addon-legacy-879929] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0610 16:31:02.173679   36689 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 16:31:02.173739   36689 notify.go:220] Checking for updates...
	I0610 16:31:02.176256   36689 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 16:31:02.178325   36689 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16578-2220/kubeconfig
	I0610 16:31:02.180193   36689 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16578-2220/.minikube
	I0610 16:31:02.181952   36689 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0610 16:31:02.183680   36689 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 16:31:02.185813   36689 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 16:31:02.220482   36689 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0610 16:31:02.220585   36689 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0610 16:31:02.311472   36689 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-06-10 16:31:02.301826385 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0610 16:31:02.311587   36689 docker.go:294] overlay module found
	I0610 16:31:02.313667   36689 out.go:177] * Using the docker driver based on user configuration
	I0610 16:31:02.315602   36689 start.go:297] selected driver: docker
	I0610 16:31:02.315621   36689 start.go:875] validating driver "docker" against <nil>
	I0610 16:31:02.315636   36689 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 16:31:02.316273   36689 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0610 16:31:02.381093   36689 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-06-10 16:31:02.371901186 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0610 16:31:02.381263   36689 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 16:31:02.381540   36689 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 16:31:02.383299   36689 out.go:177] * Using Docker driver with root privileges
	I0610 16:31:02.385844   36689 cni.go:84] Creating CNI manager for ""
	I0610 16:31:02.385863   36689 cni.go:142] "docker" driver + "containerd" runtime found, recommending kindnet
	I0610 16:31:02.385873   36689 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0610 16:31:02.385887   36689 start_flags.go:319] config:
	{Name:ingress-addon-legacy-879929 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-879929 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 16:31:02.388622   36689 out.go:177] * Starting control plane node ingress-addon-legacy-879929 in cluster ingress-addon-legacy-879929
	I0610 16:31:02.390372   36689 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0610 16:31:02.392281   36689 out.go:177] * Pulling base image ...
	I0610 16:31:02.394147   36689 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon
	I0610 16:31:02.394197   36689 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0610 16:31:02.413072   36689 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon, skipping pull
	I0610 16:31:02.413095   36689 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b exists in daemon, skipping load
	I0610 16:31:02.467968   36689 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4
	I0610 16:31:02.467992   36689 cache.go:57] Caching tarball of preloaded images
	I0610 16:31:02.468552   36689 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0610 16:31:02.470526   36689 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0610 16:31:02.472155   36689 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I0610 16:31:02.595376   36689 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4?checksum=md5:9e505be2989b8c051b1372c317471064 -> /home/jenkins/minikube-integration/16578-2220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4
	I0610 16:31:11.919662   36689 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I0610 16:31:11.919795   36689 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16578-2220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I0610 16:31:13.023582   36689 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on containerd
	I0610 16:31:13.023963   36689 profile.go:148] Saving config to /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/config.json ...
	I0610 16:31:13.024000   36689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/config.json: {Name:mk2774d5debfba675fe8b644f7e1b95c38095fd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 16:31:13.024585   36689 cache.go:195] Successfully downloaded all kic artifacts
	I0610 16:31:13.024614   36689 start.go:364] acquiring machines lock for ingress-addon-legacy-879929: {Name:mk138c9b3ab9ef97b91575547920a1ecf160f54a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 16:31:13.025013   36689 start.go:368] acquired machines lock for "ingress-addon-legacy-879929" in 385.268µs
	I0610 16:31:13.025040   36689 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-879929 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-879929 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0610 16:31:13.025126   36689 start.go:125] createHost starting for "" (driver="docker")
	I0610 16:31:13.027087   36689 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0610 16:31:13.027303   36689 start.go:159] libmachine.API.Create for "ingress-addon-legacy-879929" (driver="docker")
	I0610 16:31:13.027327   36689 client.go:168] LocalClient.Create starting
	I0610 16:31:13.027404   36689 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16578-2220/.minikube/certs/ca.pem
	I0610 16:31:13.027443   36689 main.go:141] libmachine: Decoding PEM data...
	I0610 16:31:13.027459   36689 main.go:141] libmachine: Parsing certificate...
	I0610 16:31:13.027519   36689 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16578-2220/.minikube/certs/cert.pem
	I0610 16:31:13.027543   36689 main.go:141] libmachine: Decoding PEM data...
	I0610 16:31:13.027557   36689 main.go:141] libmachine: Parsing certificate...
	I0610 16:31:13.027956   36689 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-879929 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0610 16:31:13.048758   36689 cli_runner.go:211] docker network inspect ingress-addon-legacy-879929 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0610 16:31:13.048835   36689 network_create.go:281] running [docker network inspect ingress-addon-legacy-879929] to gather additional debugging logs...
	I0610 16:31:13.048855   36689 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-879929
	W0610 16:31:13.066463   36689 cli_runner.go:211] docker network inspect ingress-addon-legacy-879929 returned with exit code 1
	I0610 16:31:13.066493   36689 network_create.go:284] error running [docker network inspect ingress-addon-legacy-879929]: docker network inspect ingress-addon-legacy-879929: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-879929 not found
	I0610 16:31:13.066509   36689 network_create.go:286] output of [docker network inspect ingress-addon-legacy-879929]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-879929 not found
	
	** /stderr **
	I0610 16:31:13.066568   36689 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0610 16:31:13.083953   36689 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000053180}
	I0610 16:31:13.083989   36689 network_create.go:123] attempt to create docker network ingress-addon-legacy-879929 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0610 16:31:13.084048   36689 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-879929 ingress-addon-legacy-879929
	I0610 16:31:13.161800   36689 network_create.go:107] docker network ingress-addon-legacy-879929 192.168.49.0/24 created
	I0610 16:31:13.161837   36689 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-879929" container
	I0610 16:31:13.161911   36689 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0610 16:31:13.179385   36689 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-879929 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-879929 --label created_by.minikube.sigs.k8s.io=true
	I0610 16:31:13.200781   36689 oci.go:103] Successfully created a docker volume ingress-addon-legacy-879929
	I0610 16:31:13.200878   36689 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-879929-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-879929 --entrypoint /usr/bin/test -v ingress-addon-legacy-879929:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b -d /var/lib
	I0610 16:31:14.695505   36689 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-879929-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-879929 --entrypoint /usr/bin/test -v ingress-addon-legacy-879929:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b -d /var/lib: (1.494584275s)
	I0610 16:31:14.695537   36689 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-879929
	I0610 16:31:14.695555   36689 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0610 16:31:14.695573   36689 kic.go:190] Starting extracting preloaded images to volume ...
	I0610 16:31:14.695655   36689 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16578-2220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-879929:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b -I lz4 -xf /preloaded.tar -C /extractDir
	I0610 16:31:19.739174   36689 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16578-2220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-879929:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b -I lz4 -xf /preloaded.tar -C /extractDir: (5.043477611s)
	I0610 16:31:19.739202   36689 kic.go:199] duration metric: took 5.043626 seconds to extract preloaded images to volume
	W0610 16:31:19.739346   36689 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0610 16:31:19.739458   36689 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0610 16:31:19.813455   36689 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-879929 --name ingress-addon-legacy-879929 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-879929 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-879929 --network ingress-addon-legacy-879929 --ip 192.168.49.2 --volume ingress-addon-legacy-879929:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b
	I0610 16:31:20.171314   36689 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-879929 --format={{.State.Running}}
	I0610 16:31:20.196719   36689 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-879929 --format={{.State.Status}}
	I0610 16:31:20.229240   36689 cli_runner.go:164] Run: docker exec ingress-addon-legacy-879929 stat /var/lib/dpkg/alternatives/iptables
	I0610 16:31:20.311447   36689 oci.go:144] the created container "ingress-addon-legacy-879929" has a running status.
	I0610 16:31:20.311475   36689 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16578-2220/.minikube/machines/ingress-addon-legacy-879929/id_rsa...
	I0610 16:31:21.444170   36689 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16578-2220/.minikube/machines/ingress-addon-legacy-879929/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0610 16:31:21.444234   36689 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16578-2220/.minikube/machines/ingress-addon-legacy-879929/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0610 16:31:21.469887   36689 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-879929 --format={{.State.Status}}
	I0610 16:31:21.491736   36689 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0610 16:31:21.491761   36689 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-879929 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0610 16:31:21.565680   36689 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-879929 --format={{.State.Status}}
	I0610 16:31:21.592215   36689 machine.go:88] provisioning docker machine ...
	I0610 16:31:21.592245   36689 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-879929"
	I0610 16:31:21.592317   36689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-879929
	I0610 16:31:21.609769   36689 main.go:141] libmachine: Using SSH client type: native
	I0610 16:31:21.610250   36689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0610 16:31:21.610268   36689 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-879929 && echo "ingress-addon-legacy-879929" | sudo tee /etc/hostname
	I0610 16:31:21.769906   36689 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-879929
	
	I0610 16:31:21.770020   36689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-879929
	I0610 16:31:21.789195   36689 main.go:141] libmachine: Using SSH client type: native
	I0610 16:31:21.789658   36689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0610 16:31:21.789683   36689 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-879929' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-879929/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-879929' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 16:31:21.932334   36689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 16:31:21.932365   36689 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16578-2220/.minikube CaCertPath:/home/jenkins/minikube-integration/16578-2220/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16578-2220/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16578-2220/.minikube}
	I0610 16:31:21.932387   36689 ubuntu.go:177] setting up certificates
	I0610 16:31:21.932396   36689 provision.go:83] configureAuth start
	I0610 16:31:21.932468   36689 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-879929
	I0610 16:31:21.951636   36689 provision.go:138] copyHostCerts
	I0610 16:31:21.951680   36689 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16578-2220/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16578-2220/.minikube/cert.pem
	I0610 16:31:21.951711   36689 exec_runner.go:144] found /home/jenkins/minikube-integration/16578-2220/.minikube/cert.pem, removing ...
	I0610 16:31:21.951722   36689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16578-2220/.minikube/cert.pem
	I0610 16:31:21.951807   36689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16578-2220/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16578-2220/.minikube/cert.pem (1123 bytes)
	I0610 16:31:21.951889   36689 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16578-2220/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16578-2220/.minikube/key.pem
	I0610 16:31:21.951916   36689 exec_runner.go:144] found /home/jenkins/minikube-integration/16578-2220/.minikube/key.pem, removing ...
	I0610 16:31:21.951923   36689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16578-2220/.minikube/key.pem
	I0610 16:31:21.951953   36689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16578-2220/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16578-2220/.minikube/key.pem (1675 bytes)
	I0610 16:31:21.952000   36689 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16578-2220/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16578-2220/.minikube/ca.pem
	I0610 16:31:21.952022   36689 exec_runner.go:144] found /home/jenkins/minikube-integration/16578-2220/.minikube/ca.pem, removing ...
	I0610 16:31:21.952030   36689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16578-2220/.minikube/ca.pem
	I0610 16:31:21.952055   36689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16578-2220/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16578-2220/.minikube/ca.pem (1078 bytes)
	I0610 16:31:21.952106   36689 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16578-2220/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16578-2220/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16578-2220/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-879929 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-879929]
	I0610 16:31:22.210580   36689 provision.go:172] copyRemoteCerts
	I0610 16:31:22.210670   36689 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 16:31:22.210714   36689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-879929
	I0610 16:31:22.228551   36689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/ingress-addon-legacy-879929/id_rsa Username:docker}
	I0610 16:31:22.328976   36689 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16578-2220/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0610 16:31:22.329036   36689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16578-2220/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0610 16:31:22.358151   36689 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16578-2220/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0610 16:31:22.358248   36689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16578-2220/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0610 16:31:22.387494   36689 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16578-2220/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0610 16:31:22.387590   36689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16578-2220/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 16:31:22.417143   36689 provision.go:86] duration metric: configureAuth took 484.727909ms
	I0610 16:31:22.417174   36689 ubuntu.go:193] setting minikube options for container-runtime
	I0610 16:31:22.417379   36689 config.go:182] Loaded profile config "ingress-addon-legacy-879929": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I0610 16:31:22.417394   36689 machine.go:91] provisioned docker machine in 825.160318ms
	I0610 16:31:22.417400   36689 client.go:171] LocalClient.Create took 9.390067116s
	I0610 16:31:22.417426   36689 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-879929" took 9.390120794s
	I0610 16:31:22.417437   36689 start.go:300] post-start starting for "ingress-addon-legacy-879929" (driver="docker")
	I0610 16:31:22.417444   36689 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 16:31:22.417509   36689 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 16:31:22.417561   36689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-879929
	I0610 16:31:22.436287   36689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/ingress-addon-legacy-879929/id_rsa Username:docker}
	I0610 16:31:22.542495   36689 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 16:31:22.546690   36689 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0610 16:31:22.546731   36689 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0610 16:31:22.546744   36689 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0610 16:31:22.546752   36689 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0610 16:31:22.546761   36689 filesync.go:126] Scanning /home/jenkins/minikube-integration/16578-2220/.minikube/addons for local assets ...
	I0610 16:31:22.546830   36689 filesync.go:126] Scanning /home/jenkins/minikube-integration/16578-2220/.minikube/files for local assets ...
	I0610 16:31:22.546917   36689 filesync.go:149] local asset: /home/jenkins/minikube-integration/16578-2220/.minikube/files/etc/ssl/certs/75262.pem -> 75262.pem in /etc/ssl/certs
	I0610 16:31:22.546929   36689 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16578-2220/.minikube/files/etc/ssl/certs/75262.pem -> /etc/ssl/certs/75262.pem
	I0610 16:31:22.547046   36689 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 16:31:22.557587   36689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16578-2220/.minikube/files/etc/ssl/certs/75262.pem --> /etc/ssl/certs/75262.pem (1708 bytes)
	I0610 16:31:22.587408   36689 start.go:303] post-start completed in 169.95621ms
	I0610 16:31:22.587850   36689 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-879929
	I0610 16:31:22.605576   36689 profile.go:148] Saving config to /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/config.json ...
	I0610 16:31:22.605894   36689 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 16:31:22.605952   36689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-879929
	I0610 16:31:22.623261   36689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/ingress-addon-legacy-879929/id_rsa Username:docker}
	I0610 16:31:22.724525   36689 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0610 16:31:22.730274   36689 start.go:128] duration metric: createHost completed in 9.705134304s
	I0610 16:31:22.730297   36689 start.go:83] releasing machines lock for "ingress-addon-legacy-879929", held for 9.705271328s
	I0610 16:31:22.730366   36689 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-879929
	I0610 16:31:22.747782   36689 ssh_runner.go:195] Run: cat /version.json
	I0610 16:31:22.747838   36689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-879929
	I0610 16:31:22.747882   36689 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 16:31:22.747938   36689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-879929
	I0610 16:31:22.779811   36689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/ingress-addon-legacy-879929/id_rsa Username:docker}
	I0610 16:31:22.781669   36689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/ingress-addon-legacy-879929/id_rsa Username:docker}
	I0610 16:31:23.019094   36689 ssh_runner.go:195] Run: systemctl --version
	I0610 16:31:23.024943   36689 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 16:31:23.030892   36689 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0610 16:31:23.060846   36689 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0610 16:31:23.060991   36689 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 16:31:23.095706   36689 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0610 16:31:23.095768   36689 start.go:481] detecting cgroup driver to use...
	I0610 16:31:23.095812   36689 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0610 16:31:23.095891   36689 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 16:31:23.110311   36689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 16:31:23.123825   36689 docker.go:193] disabling cri-docker service (if available) ...
	I0610 16:31:23.123892   36689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0610 16:31:23.139716   36689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0610 16:31:23.156376   36689 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0610 16:31:23.272351   36689 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0610 16:31:23.386179   36689 docker.go:209] disabling docker service ...
	I0610 16:31:23.386252   36689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0610 16:31:23.409514   36689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0610 16:31:23.424729   36689 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0610 16:31:23.528808   36689 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0610 16:31:23.628223   36689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0610 16:31:23.642383   36689 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 16:31:23.662712   36689 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0610 16:31:23.675002   36689 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 16:31:23.687522   36689 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 16:31:23.687603   36689 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 16:31:23.700013   36689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 16:31:23.712006   36689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 16:31:23.724012   36689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 16:31:23.736141   36689 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 16:31:23.747814   36689 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 16:31:23.764678   36689 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 16:31:23.775609   36689 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 16:31:23.786140   36689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 16:31:23.883442   36689 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0610 16:31:23.979276   36689 start.go:528] Will wait 60s for socket path /run/containerd/containerd.sock
	I0610 16:31:23.979389   36689 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0610 16:31:23.985123   36689 start.go:549] Will wait 60s for crictl version
	I0610 16:31:23.985229   36689 ssh_runner.go:195] Run: which crictl
	I0610 16:31:23.989975   36689 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 16:31:24.049459   36689 start.go:565] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.21
	RuntimeApiVersion:  v1
	I0610 16:31:24.049601   36689 ssh_runner.go:195] Run: containerd --version
	I0610 16:31:24.079596   36689 ssh_runner.go:195] Run: containerd --version
	I0610 16:31:24.112028   36689 out.go:177] * Preparing Kubernetes v1.18.20 on containerd 1.6.21 ...
	I0610 16:31:24.113791   36689 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-879929 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0610 16:31:24.130859   36689 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0610 16:31:24.135502   36689 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 16:31:24.149398   36689 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0610 16:31:24.149462   36689 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 16:31:24.193582   36689 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0610 16:31:24.193663   36689 ssh_runner.go:195] Run: which lz4
	I0610 16:31:24.198223   36689 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16578-2220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0610 16:31:24.198321   36689 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0610 16:31:24.202730   36689 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 16:31:24.202765   36689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16578-2220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (489149349 bytes)
	I0610 16:31:26.397466   36689 containerd.go:547] Took 2.199182 seconds to copy over tarball
	I0610 16:31:26.397529   36689 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 16:31:29.074688   36689 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.677132649s)
	I0610 16:31:29.074712   36689 containerd.go:554] Took 2.677224 seconds to extract the tarball
	I0610 16:31:29.074731   36689 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0610 16:31:29.214353   36689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 16:31:29.317833   36689 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0610 16:31:29.406567   36689 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 16:31:29.457803   36689 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0610 16:31:29.457910   36689 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 16:31:29.458083   36689 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0610 16:31:29.458189   36689 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0610 16:31:29.458273   36689 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0610 16:31:29.458337   36689 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0610 16:31:29.458413   36689 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0610 16:31:29.458505   36689 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0610 16:31:29.458571   36689 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0610 16:31:29.459459   36689 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0610 16:31:29.459859   36689 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0610 16:31:29.460058   36689 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 16:31:29.460300   36689 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0610 16:31:29.460453   36689 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0610 16:31:29.460591   36689 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0610 16:31:29.460719   36689 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0610 16:31:29.460949   36689 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0610 16:31:29.926243   36689 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.2"
	W0610 16:31:29.939717   36689 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0610 16:31:29.939949   36689 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.18.20"
	W0610 16:31:29.940144   36689 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0610 16:31:29.940263   36689 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns:1.6.7"
	W0610 16:31:29.945401   36689 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0610 16:31:29.945622   36689 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.4.3-0"
	W0610 16:31:29.948409   36689 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0610 16:31:29.948591   36689 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.18.20"
	W0610 16:31:29.949611   36689 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0610 16:31:29.949796   36689 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.18.20"
	W0610 16:31:29.985334   36689 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0610 16:31:29.985527   36689 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.18.20"
	W0610 16:31:30.125511   36689 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0610 16:31:30.125640   36689 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I0610 16:31:30.249364   36689 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0610 16:31:30.249405   36689 cri.go:217] Removing image: registry.k8s.io/pause:3.2
	I0610 16:31:30.249454   36689 ssh_runner.go:195] Run: which crictl
	I0610 16:31:30.690202   36689 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0610 16:31:30.690281   36689 cri.go:217] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0610 16:31:30.690378   36689 ssh_runner.go:195] Run: which crictl
	I0610 16:31:30.690507   36689 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0610 16:31:30.690545   36689 cri.go:217] Removing image: registry.k8s.io/coredns:1.6.7
	I0610 16:31:30.690584   36689 ssh_runner.go:195] Run: which crictl
	I0610 16:31:30.690685   36689 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0610 16:31:30.690722   36689 cri.go:217] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0610 16:31:30.690765   36689 ssh_runner.go:195] Run: which crictl
	I0610 16:31:30.690847   36689 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0610 16:31:30.690886   36689 cri.go:217] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0610 16:31:30.690930   36689 ssh_runner.go:195] Run: which crictl
	I0610 16:31:30.711225   36689 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0610 16:31:30.711308   36689 cri.go:217] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0610 16:31:30.711387   36689 ssh_runner.go:195] Run: which crictl
	I0610 16:31:30.773121   36689 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0610 16:31:30.773200   36689 cri.go:217] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0610 16:31:30.773275   36689 ssh_runner.go:195] Run: which crictl
	I0610 16:31:30.796067   36689 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0610 16:31:30.796210   36689 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0610 16:31:30.796283   36689 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0610 16:31:30.796326   36689 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0610 16:31:30.796365   36689 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0610 16:31:30.796407   36689 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0610 16:31:30.796450   36689 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0610 16:31:30.796733   36689 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0610 16:31:30.796768   36689 cri.go:217] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 16:31:30.796809   36689 ssh_runner.go:195] Run: which crictl
	I0610 16:31:30.990661   36689 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16578-2220/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0610 16:31:30.990727   36689 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16578-2220/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0610 16:31:30.990769   36689 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16578-2220/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0610 16:31:30.990804   36689 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16578-2220/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0610 16:31:30.990858   36689 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16578-2220/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0610 16:31:30.990898   36689 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16578-2220/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0610 16:31:30.990936   36689 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16578-2220/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0610 16:31:30.990992   36689 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 16:31:31.049659   36689 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16578-2220/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0610 16:31:31.049734   36689 cache_images.go:92] LoadImages completed in 1.59190624s
	W0610 16:31:31.049810   36689 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16578-2220/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20: no such file or directory
	I0610 16:31:31.049870   36689 ssh_runner.go:195] Run: sudo crictl info
	I0610 16:31:31.096599   36689 cni.go:84] Creating CNI manager for ""
	I0610 16:31:31.096622   36689 cni.go:142] "docker" driver + "containerd" runtime found, recommending kindnet
	I0610 16:31:31.096637   36689 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0610 16:31:31.096655   36689 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-879929 NodeName:ingress-addon-legacy-879929 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0610 16:31:31.096789   36689 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "ingress-addon-legacy-879929"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0610 16:31:31.096867   36689 kubeadm.go:971] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=ingress-addon-legacy-879929 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-879929 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0610 16:31:31.096932   36689 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0610 16:31:31.107540   36689 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 16:31:31.107606   36689 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 16:31:31.117808   36689 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I0610 16:31:31.138532   36689 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0610 16:31:31.159189   36689 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2131 bytes)
	I0610 16:31:31.179840   36689 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0610 16:31:31.184095   36689 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 16:31:31.197346   36689 certs.go:56] Setting up /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929 for IP: 192.168.49.2
	I0610 16:31:31.197374   36689 certs.go:190] acquiring lock for shared ca certs: {Name:mke388f9dea4ce5085a6492ed88d04b6a5be93b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 16:31:31.197508   36689 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16578-2220/.minikube/ca.key
	I0610 16:31:31.197555   36689 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16578-2220/.minikube/proxy-client-ca.key
	I0610 16:31:31.197602   36689 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.key
	I0610 16:31:31.197616   36689 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt with IP's: []
	I0610 16:31:31.858944   36689 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt ...
	I0610 16:31:31.858977   36689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt: {Name:mk73e03aa116ceaa4634f84dce0d68050cf48da9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 16:31:31.859170   36689 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.key ...
	I0610 16:31:31.859183   36689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.key: {Name:mkf0072f7b358455521340d00d81492a12449503 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 16:31:31.859676   36689 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/apiserver.key.dd3b5fb2
	I0610 16:31:31.859697   36689 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0610 16:31:32.249527   36689 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/apiserver.crt.dd3b5fb2 ...
	I0610 16:31:32.249564   36689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/apiserver.crt.dd3b5fb2: {Name:mke88c1a88c59d1535ce014fcb94b102541b3632 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 16:31:32.250174   36689 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/apiserver.key.dd3b5fb2 ...
	I0610 16:31:32.250192   36689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/apiserver.key.dd3b5fb2: {Name:mk4e27fa2c9e44ced1237254d26da3eb66b70866 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 16:31:32.250636   36689 certs.go:337] copying /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/apiserver.crt
	I0610 16:31:32.250721   36689 certs.go:341] copying /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/apiserver.key
	I0610 16:31:32.250779   36689 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/proxy-client.key
	I0610 16:31:32.250795   36689 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/proxy-client.crt with IP's: []
	I0610 16:31:32.763965   36689 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/proxy-client.crt ...
	I0610 16:31:32.764000   36689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/proxy-client.crt: {Name:mk97577a214eb7ea8ededc8ddd5e8b8ea043f31e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 16:31:32.764660   36689 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/proxy-client.key ...
	I0610 16:31:32.764683   36689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/proxy-client.key: {Name:mk07725823edcf37a2c167d8476963432cf0882f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 16:31:32.764779   36689 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 16:31:32.764798   36689 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 16:31:32.764810   36689 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 16:31:32.764827   36689 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 16:31:32.764844   36689 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16578-2220/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 16:31:32.764859   36689 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16578-2220/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0610 16:31:32.764876   36689 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16578-2220/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 16:31:32.764891   36689 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16578-2220/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 16:31:32.764950   36689 certs.go:437] found cert: /home/jenkins/minikube-integration/16578-2220/.minikube/certs/home/jenkins/minikube-integration/16578-2220/.minikube/certs/7526.pem (1338 bytes)
	W0610 16:31:32.764993   36689 certs.go:433] ignoring /home/jenkins/minikube-integration/16578-2220/.minikube/certs/home/jenkins/minikube-integration/16578-2220/.minikube/certs/7526_empty.pem, impossibly tiny 0 bytes
	I0610 16:31:32.765006   36689 certs.go:437] found cert: /home/jenkins/minikube-integration/16578-2220/.minikube/certs/home/jenkins/minikube-integration/16578-2220/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 16:31:32.765036   36689 certs.go:437] found cert: /home/jenkins/minikube-integration/16578-2220/.minikube/certs/home/jenkins/minikube-integration/16578-2220/.minikube/certs/ca.pem (1078 bytes)
	I0610 16:31:32.765065   36689 certs.go:437] found cert: /home/jenkins/minikube-integration/16578-2220/.minikube/certs/home/jenkins/minikube-integration/16578-2220/.minikube/certs/cert.pem (1123 bytes)
	I0610 16:31:32.765094   36689 certs.go:437] found cert: /home/jenkins/minikube-integration/16578-2220/.minikube/certs/home/jenkins/minikube-integration/16578-2220/.minikube/certs/key.pem (1675 bytes)
	I0610 16:31:32.765143   36689 certs.go:437] found cert: /home/jenkins/minikube-integration/16578-2220/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16578-2220/.minikube/files/etc/ssl/certs/75262.pem (1708 bytes)
	I0610 16:31:32.765174   36689 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16578-2220/.minikube/files/etc/ssl/certs/75262.pem -> /usr/share/ca-certificates/75262.pem
	I0610 16:31:32.765192   36689 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16578-2220/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 16:31:32.765209   36689 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16578-2220/.minikube/certs/7526.pem -> /usr/share/ca-certificates/7526.pem
	I0610 16:31:32.765741   36689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0610 16:31:32.794286   36689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0610 16:31:32.822560   36689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 16:31:32.850637   36689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0610 16:31:32.878951   36689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16578-2220/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 16:31:32.906829   36689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16578-2220/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 16:31:32.934957   36689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16578-2220/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 16:31:32.963294   36689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16578-2220/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 16:31:32.992251   36689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16578-2220/.minikube/files/etc/ssl/certs/75262.pem --> /usr/share/ca-certificates/75262.pem (1708 bytes)
	I0610 16:31:33.023134   36689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16578-2220/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 16:31:33.052332   36689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16578-2220/.minikube/certs/7526.pem --> /usr/share/ca-certificates/7526.pem (1338 bytes)
	I0610 16:31:33.082309   36689 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 16:31:33.104158   36689 ssh_runner.go:195] Run: openssl version
	I0610 16:31:33.111572   36689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7526.pem && ln -fs /usr/share/ca-certificates/7526.pem /etc/ssl/certs/7526.pem"
	I0610 16:31:33.123634   36689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7526.pem
	I0610 16:31:33.128459   36689 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 10 16:27 /usr/share/ca-certificates/7526.pem
	I0610 16:31:33.128561   36689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7526.pem
	I0610 16:31:33.137898   36689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7526.pem /etc/ssl/certs/51391683.0"
	I0610 16:31:33.150327   36689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75262.pem && ln -fs /usr/share/ca-certificates/75262.pem /etc/ssl/certs/75262.pem"
	I0610 16:31:33.162203   36689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75262.pem
	I0610 16:31:33.167225   36689 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 10 16:27 /usr/share/ca-certificates/75262.pem
	I0610 16:31:33.167330   36689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75262.pem
	I0610 16:31:33.176153   36689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75262.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 16:31:33.187771   36689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 16:31:33.199835   36689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 16:31:33.204701   36689 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 10 16:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 16:31:33.204788   36689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 16:31:33.213289   36689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
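The block above installs each CA into the node's system trust store: openssl computes the certificate's subject hash, and a symlink named <hash>.0 pointing at the certificate is created under /etc/ssl/certs. The following is a rough local Go sketch of those two steps (paths are the ones from the log and purely illustrative; minikube itself runs the equivalent openssl/ln commands over SSH, as shown above):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert computes the OpenSSL subject hash of a CA certificate and
	// symlinks it into certsDir as <hash>.0, which is how OpenSSL-based
	// clients discover trusted CAs. Simplified local sketch only.
	func linkCACert(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // replace any existing link, like `ln -fs`
		return os.Symlink(certPath, link)
	}

	func main() {
		// Illustrative path taken from the log above.
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}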
	I0610 16:31:33.225049   36689 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0610 16:31:33.229453   36689 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0610 16:31:33.229501   36689 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-879929 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-879929 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 16:31:33.229582   36689 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0610 16:31:33.229636   36689 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0610 16:31:33.271972   36689 cri.go:88] found id: ""
	I0610 16:31:33.272082   36689 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 16:31:33.282951   36689 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 16:31:33.293660   36689 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0610 16:31:33.293728   36689 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 16:31:33.304765   36689 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 16:31:33.304828   36689 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0610 16:31:33.359714   36689 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0610 16:31:33.362675   36689 kubeadm.go:322] [preflight] Running pre-flight checks
	I0610 16:31:33.417156   36689 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0610 16:31:33.417290   36689 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1037-aws
	I0610 16:31:33.417375   36689 kubeadm.go:322] OS: Linux
	I0610 16:31:33.417457   36689 kubeadm.go:322] CGROUPS_CPU: enabled
	I0610 16:31:33.417539   36689 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0610 16:31:33.417617   36689 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0610 16:31:33.417699   36689 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0610 16:31:33.417775   36689 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0610 16:31:33.417864   36689 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0610 16:31:33.507877   36689 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 16:31:33.507985   36689 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 16:31:33.508078   36689 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 16:31:33.757738   36689 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 16:31:33.759209   36689 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 16:31:33.759467   36689 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0610 16:31:33.878835   36689 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 16:31:33.881662   36689 out.go:204]   - Generating certificates and keys ...
	I0610 16:31:33.881798   36689 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0610 16:31:33.881897   36689 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0610 16:31:34.271867   36689 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 16:31:34.689271   36689 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0610 16:31:35.013750   36689 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0610 16:31:35.409219   36689 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0610 16:31:35.665404   36689 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0610 16:31:35.665775   36689 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-879929 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0610 16:31:36.855880   36689 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0610 16:31:36.856699   36689 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-879929 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0610 16:31:37.514890   36689 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 16:31:37.869521   36689 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 16:31:38.669024   36689 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0610 16:31:38.669377   36689 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 16:31:38.995093   36689 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 16:31:39.191461   36689 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 16:31:39.573174   36689 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 16:31:40.383926   36689 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 16:31:40.384550   36689 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 16:31:40.386829   36689 out.go:204]   - Booting up control plane ...
	I0610 16:31:40.386943   36689 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 16:31:40.393245   36689 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 16:31:40.400615   36689 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 16:31:40.400709   36689 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 16:31:40.403178   36689 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 16:31:52.905919   36689 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.502707 seconds
	I0610 16:31:52.906032   36689 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 16:31:52.920010   36689 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 16:31:53.442226   36689 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 16:31:53.442471   36689 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-879929 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0610 16:31:53.950763   36689 kubeadm.go:322] [bootstrap-token] Using token: lde6c8.gbi8gbob0toi91as
	I0610 16:31:53.952440   36689 out.go:204]   - Configuring RBAC rules ...
	I0610 16:31:53.952564   36689 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 16:31:53.958671   36689 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 16:31:53.969959   36689 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 16:31:53.973645   36689 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 16:31:53.977860   36689 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 16:31:53.981126   36689 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 16:31:53.991201   36689 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 16:31:54.254110   36689 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0610 16:31:54.372900   36689 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0610 16:31:54.374861   36689 kubeadm.go:322] 
	I0610 16:31:54.374934   36689 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0610 16:31:54.374944   36689 kubeadm.go:322] 
	I0610 16:31:54.375017   36689 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0610 16:31:54.375041   36689 kubeadm.go:322] 
	I0610 16:31:54.375095   36689 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0610 16:31:54.375154   36689 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 16:31:54.375205   36689 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 16:31:54.375213   36689 kubeadm.go:322] 
	I0610 16:31:54.375262   36689 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0610 16:31:54.375336   36689 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 16:31:54.375404   36689 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 16:31:54.375413   36689 kubeadm.go:322] 
	I0610 16:31:54.375493   36689 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 16:31:54.375569   36689 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0610 16:31:54.375577   36689 kubeadm.go:322] 
	I0610 16:31:54.375667   36689 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token lde6c8.gbi8gbob0toi91as \
	I0610 16:31:54.375771   36689 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:5bc14f008eafd77d085ab1d9d6f7c71ae8f6d38083eb171c3f7e9c167a550f4a \
	I0610 16:31:54.375796   36689 kubeadm.go:322]     --control-plane 
	I0610 16:31:54.375804   36689 kubeadm.go:322] 
	I0610 16:31:54.375885   36689 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0610 16:31:54.375894   36689 kubeadm.go:322] 
	I0610 16:31:54.375971   36689 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token lde6c8.gbi8gbob0toi91as \
	I0610 16:31:54.376078   36689 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:5bc14f008eafd77d085ab1d9d6f7c71ae8f6d38083eb171c3f7e9c167a550f4a 
	I0610 16:31:54.383133   36689 kubeadm.go:322] W0610 16:31:33.359016    1107 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0610 16:31:54.383350   36689 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1037-aws\n", err: exit status 1
	I0610 16:31:54.383454   36689 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 16:31:54.383578   36689 kubeadm.go:322] W0610 16:31:40.393073    1107 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0610 16:31:54.383702   36689 kubeadm.go:322] W0610 16:31:40.395402    1107 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0610 16:31:54.383721   36689 cni.go:84] Creating CNI manager for ""
	I0610 16:31:54.383729   36689 cni.go:142] "docker" driver + "containerd" runtime found, recommending kindnet
	I0610 16:31:54.385539   36689 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0610 16:31:54.387341   36689 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0610 16:31:54.392920   36689 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0610 16:31:54.392938   36689 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0610 16:31:54.418252   36689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0610 16:31:54.845641   36689 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 16:31:54.845777   36689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=eafc8e84d7336f18f4fb303d71d15fbd84fd16d5 minikube.k8s.io/name=ingress-addon-legacy-879929 minikube.k8s.io/updated_at=2023_06_10T16_31_54_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:31:54.845789   36689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:31:55.004119   36689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:31:55.004194   36689 ops.go:34] apiserver oom_adj: -16
	I0610 16:31:55.602421   36689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:31:56.102361   36689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:31:56.602729   36689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:31:57.102724   36689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:31:57.601964   36689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:31:58.102596   36689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:31:58.602500   36689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:31:59.102593   36689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:31:59.602687   36689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:32:00.102603   36689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:32:00.602310   36689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:32:01.102400   36689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:32:01.602666   36689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:32:02.102169   36689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:32:02.601947   36689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:32:03.102894   36689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:32:03.602339   36689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:32:04.102712   36689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:32:04.602701   36689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:32:05.102268   36689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:32:05.602300   36689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:32:06.102460   36689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:32:06.602722   36689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:32:07.101876   36689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:32:07.601883   36689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:32:08.102506   36689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:32:08.602588   36689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:32:09.102004   36689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:32:09.602582   36689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:32:10.102118   36689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 16:32:10.262475   36689 kubeadm.go:1076] duration metric: took 15.416751645s to wait for elevateKubeSystemPrivileges.
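The burst of `kubectl get sa default` calls above is minikube polling until the default service account exists before binding it to cluster-admin (the elevateKubeSystemPrivileges wait, which took ~15.4s here). A minimal client-go sketch of an equivalent wait, assuming the standard k8s.io/client-go and k8s.io/apimachinery packages (helper name is illustrative):

	package sketch

	import (
		"context"
		"time"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForDefaultSA polls until the "default" ServiceAccount exists in the
	// "default" namespace; the repeated `kubectl get sa default` runs above
	// are the shell equivalent of this loop.
	func waitForDefaultSA(ctx context.Context, cs *kubernetes.Clientset) error {
		return wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
			_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil // not created yet, keep polling
			}
			return err == nil, err
		})
	}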
	I0610 16:32:10.262521   36689 kubeadm.go:406] StartCluster complete in 37.03300562s
	I0610 16:32:10.262544   36689 settings.go:142] acquiring lock: {Name:mka1eca2c16888376cc44d7f55f3d7e369175085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 16:32:10.262633   36689 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16578-2220/kubeconfig
	I0610 16:32:10.263332   36689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16578-2220/kubeconfig: {Name:mk9761da47d382771738f32de309583d22d7ff06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 16:32:10.264056   36689 kapi.go:59] client config for ingress-addon-legacy-879929: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt", KeyFile:"/home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.key", CAFile:"/home/jenkins/minikube-integration/16578-2220/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13dfeb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
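The rest.Config dumped above is the client configuration minikube derives from the profile: the API server URL plus the profile's client certificate/key and the cluster CA. A minimal sketch of building an equivalent client with client-go (same paths as in the log, shown for illustration only):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// API server address and credential paths taken from the log above.
		cfg := &rest.Config{
			Host: "https://192.168.49.2:8443",
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: "/home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt",
				KeyFile:  "/home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.key",
				CAFile:   "/home/jenkins/minikube-integration/16578-2220/.minikube/ca.crt",
			},
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("nodes:", len(nodes.Items))
	}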
	I0610 16:32:10.265411   36689 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0610 16:32:10.265692   36689 config.go:182] Loaded profile config "ingress-addon-legacy-879929": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I0610 16:32:10.265723   36689 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0610 16:32:10.265778   36689 addons.go:66] Setting storage-provisioner=true in profile "ingress-addon-legacy-879929"
	I0610 16:32:10.265791   36689 addons.go:228] Setting addon storage-provisioner=true in "ingress-addon-legacy-879929"
	I0610 16:32:10.265839   36689 host.go:66] Checking if "ingress-addon-legacy-879929" exists ...
	I0610 16:32:10.266267   36689 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-879929 --format={{.State.Status}}
	I0610 16:32:10.266940   36689 cert_rotation.go:137] Starting client certificate rotation controller
	I0610 16:32:10.266974   36689 addons.go:66] Setting default-storageclass=true in profile "ingress-addon-legacy-879929"
	I0610 16:32:10.266988   36689 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-879929"
	I0610 16:32:10.267274   36689 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-879929 --format={{.State.Status}}
	I0610 16:32:10.306473   36689 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 16:32:10.310609   36689 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 16:32:10.310632   36689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 16:32:10.310697   36689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-879929
	I0610 16:32:10.326687   36689 kapi.go:59] client config for ingress-addon-legacy-879929: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt", KeyFile:"/home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.key", CAFile:"/home/jenkins/minikube-integration/16578-2220/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13dfeb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 16:32:10.359965   36689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/ingress-addon-legacy-879929/id_rsa Username:docker}
	I0610 16:32:10.381606   36689 addons.go:228] Setting addon default-storageclass=true in "ingress-addon-legacy-879929"
	I0610 16:32:10.381645   36689 host.go:66] Checking if "ingress-addon-legacy-879929" exists ...
	I0610 16:32:10.382087   36689 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-879929 --format={{.State.Status}}
	I0610 16:32:10.408355   36689 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 16:32:10.408376   36689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 16:32:10.408438   36689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-879929
	I0610 16:32:10.438548   36689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/ingress-addon-legacy-879929/id_rsa Username:docker}
	I0610 16:32:10.528300   36689 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
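The sed pipeline above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.49.1 for this cluster). A hedged client-go sketch of the same edit (the helper name is made up; the hosts block mirrors the one spliced in by the command):

	package sketch

	import (
		"context"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// injectHostRecord approximates the sed pipeline above: fetch the coredns
	// ConfigMap and splice a hosts{} block in front of the `forward` plugin so
	// host.minikube.internal resolves to the host gateway IP.
	func injectHostRecord(ctx context.Context, cs *kubernetes.Clientset, hostIP string) error {
		cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		block := "        hosts {\n           " + hostIP + " host.minikube.internal\n           fallthrough\n        }\n"
		cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "        forward .", block+"        forward .", 1)
		_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
		return err
	}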
	I0610 16:32:10.627738   36689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 16:32:10.740004   36689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 16:32:10.860172   36689 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-879929" context rescaled to 1 replicas
	I0610 16:32:10.860221   36689 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0610 16:32:10.868178   36689 out.go:177] * Verifying Kubernetes components...
	I0610 16:32:10.871119   36689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 16:32:11.088915   36689 start.go:916] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0610 16:32:11.368396   36689 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0610 16:32:11.370509   36689 addons.go:499] enable addons completed in 1.104777748s: enabled=[default-storageclass storage-provisioner]
	I0610 16:32:11.367541   36689 kapi.go:59] client config for ingress-addon-legacy-879929: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt", KeyFile:"/home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.key", CAFile:"/home/jenkins/minikube-integration/16578-2220/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13dfeb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 16:32:11.370789   36689 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-879929" to be "Ready" ...
	I0610 16:32:11.378734   36689 node_ready.go:49] node "ingress-addon-legacy-879929" has status "Ready":"True"
	I0610 16:32:11.378758   36689 node_ready.go:38] duration metric: took 7.950515ms waiting for node "ingress-addon-legacy-879929" to be "Ready" ...
	I0610 16:32:11.378768   36689 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 16:32:11.390223   36689 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-njqvp" in "kube-system" namespace to be "Ready" ...
	I0610 16:32:13.412250   36689 pod_ready.go:102] pod "coredns-66bff467f8-njqvp" in "kube-system" namespace has status "Ready":"False"
	I0610 16:32:15.912436   36689 pod_ready.go:102] pod "coredns-66bff467f8-njqvp" in "kube-system" namespace has status "Ready":"False"
	I0610 16:32:18.411543   36689 pod_ready.go:102] pod "coredns-66bff467f8-njqvp" in "kube-system" namespace has status "Ready":"False"
	I0610 16:32:20.411911   36689 pod_ready.go:102] pod "coredns-66bff467f8-njqvp" in "kube-system" namespace has status "Ready":"False"
	I0610 16:32:22.412161   36689 pod_ready.go:102] pod "coredns-66bff467f8-njqvp" in "kube-system" namespace has status "Ready":"False"
	I0610 16:32:24.911827   36689 pod_ready.go:102] pod "coredns-66bff467f8-njqvp" in "kube-system" namespace has status "Ready":"False"
	I0610 16:32:27.412647   36689 pod_ready.go:102] pod "coredns-66bff467f8-njqvp" in "kube-system" namespace has status "Ready":"False"
	I0610 16:32:27.911257   36689 pod_ready.go:92] pod "coredns-66bff467f8-njqvp" in "kube-system" namespace has status "Ready":"True"
	I0610 16:32:27.911282   36689 pod_ready.go:81] duration metric: took 16.521032849s waiting for pod "coredns-66bff467f8-njqvp" in "kube-system" namespace to be "Ready" ...
	I0610 16:32:27.911294   36689 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-z84wh" in "kube-system" namespace to be "Ready" ...
	I0610 16:32:27.913183   36689 pod_ready.go:97] error getting pod "coredns-66bff467f8-z84wh" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-z84wh" not found
	I0610 16:32:27.913206   36689 pod_ready.go:81] duration metric: took 1.904392ms waiting for pod "coredns-66bff467f8-z84wh" in "kube-system" namespace to be "Ready" ...
	E0610 16:32:27.913223   36689 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-66bff467f8-z84wh" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-z84wh" not found
	I0610 16:32:27.913230   36689 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-879929" in "kube-system" namespace to be "Ready" ...
	I0610 16:32:27.918052   36689 pod_ready.go:92] pod "etcd-ingress-addon-legacy-879929" in "kube-system" namespace has status "Ready":"True"
	I0610 16:32:27.918077   36689 pod_ready.go:81] duration metric: took 4.838675ms waiting for pod "etcd-ingress-addon-legacy-879929" in "kube-system" namespace to be "Ready" ...
	I0610 16:32:27.918094   36689 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-879929" in "kube-system" namespace to be "Ready" ...
	I0610 16:32:27.923233   36689 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-879929" in "kube-system" namespace has status "Ready":"True"
	I0610 16:32:27.923257   36689 pod_ready.go:81] duration metric: took 5.155636ms waiting for pod "kube-apiserver-ingress-addon-legacy-879929" in "kube-system" namespace to be "Ready" ...
	I0610 16:32:27.923302   36689 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-879929" in "kube-system" namespace to be "Ready" ...
	I0610 16:32:27.928451   36689 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-879929" in "kube-system" namespace has status "Ready":"True"
	I0610 16:32:27.928476   36689 pod_ready.go:81] duration metric: took 5.158836ms waiting for pod "kube-controller-manager-ingress-addon-legacy-879929" in "kube-system" namespace to be "Ready" ...
	I0610 16:32:27.928488   36689 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k29x9" in "kube-system" namespace to be "Ready" ...
	I0610 16:32:28.106185   36689 request.go:628] Waited for 175.131626ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-879929
	I0610 16:32:28.109248   36689 pod_ready.go:92] pod "kube-proxy-k29x9" in "kube-system" namespace has status "Ready":"True"
	I0610 16:32:28.109275   36689 pod_ready.go:81] duration metric: took 180.779155ms waiting for pod "kube-proxy-k29x9" in "kube-system" namespace to be "Ready" ...
	I0610 16:32:28.109287   36689 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-879929" in "kube-system" namespace to be "Ready" ...
	I0610 16:32:28.306678   36689 request.go:628] Waited for 197.308739ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-879929
	I0610 16:32:28.506636   36689 request.go:628] Waited for 197.310347ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-879929
	I0610 16:32:28.509381   36689 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-879929" in "kube-system" namespace has status "Ready":"True"
	I0610 16:32:28.509405   36689 pod_ready.go:81] duration metric: took 400.106444ms waiting for pod "kube-scheduler-ingress-addon-legacy-879929" in "kube-system" namespace to be "Ready" ...
	I0610 16:32:28.509418   36689 pod_ready.go:38] duration metric: took 17.130636562s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
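The node_ready/pod_ready waits above poll each system-critical pod until it reports the Ready condition. A small client-go sketch of one such wait (illustrative helper; assumes k8s.io/client-go, k8s.io/api and k8s.io/apimachinery):

	package sketch

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady blocks until the named pod reports condition Ready=True,
	// roughly what pod_ready.go does for each pod listed above.
	func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat lookup errors as "not ready yet" and keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}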
	I0610 16:32:28.509458   36689 api_server.go:52] waiting for apiserver process to appear ...
	I0610 16:32:28.509539   36689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 16:32:28.522860   36689 api_server.go:72] duration metric: took 17.662607825s to wait for apiserver process to appear ...
	I0610 16:32:28.522889   36689 api_server.go:88] waiting for apiserver healthz status ...
	I0610 16:32:28.522905   36689 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0610 16:32:28.532258   36689 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0610 16:32:28.533229   36689 api_server.go:141] control plane version: v1.18.20
	I0610 16:32:28.533249   36689 api_server.go:131] duration metric: took 10.353659ms to wait for apiserver health ...
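The healthz probe above is a plain GET against https://192.168.49.2:8443/healthz that expects the literal body "ok" (the 200 response logged above). An equivalent check through a client-go clientset might look like this (sketch only):

	package sketch

	import (
		"context"

		"k8s.io/client-go/kubernetes"
	)

	// apiserverHealthy issues GET /healthz against the API server behind the
	// clientset and reports whether it answered with the expected "ok" body.
	func apiserverHealthy(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
		body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
		if err != nil {
			return false, err
		}
		return string(body) == "ok", nil
	}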
	I0610 16:32:28.533257   36689 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 16:32:28.706700   36689 request.go:628] Waited for 173.37947ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0610 16:32:28.712980   36689 system_pods.go:59] 8 kube-system pods found
	I0610 16:32:28.713017   36689 system_pods.go:61] "coredns-66bff467f8-njqvp" [6345f6d0-a18a-4e6e-90a0-a2e45ca92dc5] Running
	I0610 16:32:28.713025   36689 system_pods.go:61] "etcd-ingress-addon-legacy-879929" [b4de2135-c8d3-423c-b2ee-e70b0b23e42c] Running
	I0610 16:32:28.713030   36689 system_pods.go:61] "kindnet-wh9rr" [bb9386e6-03c6-4a27-9902-1dba691fcf9b] Running
	I0610 16:32:28.713061   36689 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-879929" [7395da46-61f7-4e67-bf97-53e3c9521aa7] Running
	I0610 16:32:28.713077   36689 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-879929" [9114cf6b-1f46-4a62-b3e5-dec6ecb0eb8b] Running
	I0610 16:32:28.713082   36689 system_pods.go:61] "kube-proxy-k29x9" [d575a6db-4733-4c76-b5b9-cef8977a7340] Running
	I0610 16:32:28.713087   36689 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-879929" [aa7f24ca-92ce-483e-9367-e4a2d4f8b0c9] Running
	I0610 16:32:28.713094   36689 system_pods.go:61] "storage-provisioner" [f97a7a5f-46fd-45a7-9560-7ffcbcc7d34a] Running
	I0610 16:32:28.713099   36689 system_pods.go:74] duration metric: took 179.837075ms to wait for pod list to return data ...
	I0610 16:32:28.713109   36689 default_sa.go:34] waiting for default service account to be created ...
	I0610 16:32:28.906514   36689 request.go:628] Waited for 193.327263ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0610 16:32:28.909188   36689 default_sa.go:45] found service account: "default"
	I0610 16:32:28.909215   36689 default_sa.go:55] duration metric: took 196.099618ms for default service account to be created ...
	I0610 16:32:28.909225   36689 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 16:32:29.106659   36689 request.go:628] Waited for 197.330105ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0610 16:32:29.113024   36689 system_pods.go:86] 8 kube-system pods found
	I0610 16:32:29.113057   36689 system_pods.go:89] "coredns-66bff467f8-njqvp" [6345f6d0-a18a-4e6e-90a0-a2e45ca92dc5] Running
	I0610 16:32:29.113070   36689 system_pods.go:89] "etcd-ingress-addon-legacy-879929" [b4de2135-c8d3-423c-b2ee-e70b0b23e42c] Running
	I0610 16:32:29.113079   36689 system_pods.go:89] "kindnet-wh9rr" [bb9386e6-03c6-4a27-9902-1dba691fcf9b] Running
	I0610 16:32:29.113084   36689 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-879929" [7395da46-61f7-4e67-bf97-53e3c9521aa7] Running
	I0610 16:32:29.113090   36689 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-879929" [9114cf6b-1f46-4a62-b3e5-dec6ecb0eb8b] Running
	I0610 16:32:29.113100   36689 system_pods.go:89] "kube-proxy-k29x9" [d575a6db-4733-4c76-b5b9-cef8977a7340] Running
	I0610 16:32:29.113105   36689 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-879929" [aa7f24ca-92ce-483e-9367-e4a2d4f8b0c9] Running
	I0610 16:32:29.113110   36689 system_pods.go:89] "storage-provisioner" [f97a7a5f-46fd-45a7-9560-7ffcbcc7d34a] Running
	I0610 16:32:29.113118   36689 system_pods.go:126] duration metric: took 203.888681ms to wait for k8s-apps to be running ...
	I0610 16:32:29.113135   36689 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 16:32:29.113194   36689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 16:32:29.127125   36689 system_svc.go:56] duration metric: took 13.980611ms WaitForService to wait for kubelet.
	I0610 16:32:29.127153   36689 kubeadm.go:581] duration metric: took 18.266906751s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0610 16:32:29.127172   36689 node_conditions.go:102] verifying NodePressure condition ...
	I0610 16:32:29.306587   36689 request.go:628] Waited for 179.330462ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0610 16:32:29.309447   36689 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0610 16:32:29.309476   36689 node_conditions.go:123] node cpu capacity is 2
	I0610 16:32:29.309489   36689 node_conditions.go:105] duration metric: took 182.31208ms to run NodePressure ...
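The NodePressure step reads the node's reported capacity (203034800Ki of ephemeral storage and 2 CPUs here). A short client-go sketch that lists nodes and prints those same two figures (helper name is illustrative):

	package sketch

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// printNodeCapacity lists cluster nodes and prints the capacity fields the
	// log reports above: ephemeral storage and CPU count.
	func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
				n.Name,
				n.Status.Capacity.StorageEphemeral().String(),
				n.Status.Capacity.Cpu().String())
		}
		return nil
	}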
	I0610 16:32:29.309522   36689 start.go:228] waiting for startup goroutines ...
	I0610 16:32:29.309537   36689 start.go:233] waiting for cluster config update ...
	I0610 16:32:29.309547   36689 start.go:242] writing updated cluster config ...
	I0610 16:32:29.309874   36689 ssh_runner.go:195] Run: rm -f paused
	I0610 16:32:29.374588   36689 start.go:573] kubectl: 1.27.2, cluster: 1.18.20 (minor skew: 9)
	I0610 16:32:29.376337   36689 out.go:177] 
	W0610 16:32:29.377847   36689 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.18.20.
	I0610 16:32:29.379454   36689 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0610 16:32:29.381002   36689 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-879929" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a93a865c7381c       13753a81eccfd       11 seconds ago       Exited              hello-world-app           2                   525c82d630adb       hello-world-app-5f5d8b66bb-mzw56
	9c6c0f08a1dd2       5ee47dcca7543       37 seconds ago       Running             nginx                     0                   3921ddfd522f0       nginx
	e16584f95adf6       d7f0cba3aa5bf       58 seconds ago       Exited              controller                0                   cd9b1a8eb3b28       ingress-nginx-controller-7fcf777cb7-grkp5
	d4c7aaca6de69       a883f7fc35610       About a minute ago   Exited              patch                     0                   c3ebac37e8d81       ingress-nginx-admission-patch-kxnbd
	3fc86cf1bc427       a883f7fc35610       About a minute ago   Exited              create                    0                   a5de4cdab5b10       ingress-nginx-admission-create-7mnrb
	afe1bf9c7078f       6e17ba78cf3eb       About a minute ago   Running             coredns                   0                   506b84d6b0723       coredns-66bff467f8-njqvp
	88ee39b699d52       ba04bb24b9575       About a minute ago   Running             storage-provisioner       0                   5e9010256fe23       storage-provisioner
	13a91a90d3da3       b18bf71b941ba       About a minute ago   Running             kindnet-cni               0                   d6cd7ec2d2f2c       kindnet-wh9rr
	bd4e21395fc28       565297bc6f7d4       About a minute ago   Running             kube-proxy                0                   85f50a3007653       kube-proxy-k29x9
	51a9963da8ba1       ab707b0a0ea33       About a minute ago   Running             etcd                      0                   49a02c9e98c08       etcd-ingress-addon-legacy-879929
	4a8f603bae13b       2694cf044d665       About a minute ago   Running             kube-apiserver            0                   6ffb4fbb1eccd       kube-apiserver-ingress-addon-legacy-879929
	52085491d2691       68a4fac29a865       About a minute ago   Running             kube-controller-manager   0                   08b70caae2576       kube-controller-manager-ingress-addon-legacy-879929
	5ca0dfc9d8437       095f37015706d       About a minute ago   Running             kube-scheduler            0                   925b6e6b9e1cf       kube-scheduler-ingress-addon-legacy-879929
	
	* 
	* ==> containerd <==
	* Jun 10 16:33:30 ingress-addon-legacy-879929 containerd[825]: time="2023-06-10T16:33:30.643728729Z" level=info msg="shim disconnected" id=e16584f95adf6f19580783b34c26424b01d7f086353e0d7de4dc0bbbc80033d6
	Jun 10 16:33:30 ingress-addon-legacy-879929 containerd[825]: time="2023-06-10T16:33:30.643787822Z" level=warning msg="cleaning up after shim disconnected" id=e16584f95adf6f19580783b34c26424b01d7f086353e0d7de4dc0bbbc80033d6 namespace=k8s.io
	Jun 10 16:33:30 ingress-addon-legacy-879929 containerd[825]: time="2023-06-10T16:33:30.643800162Z" level=info msg="cleaning up dead shim"
	Jun 10 16:33:30 ingress-addon-legacy-879929 containerd[825]: time="2023-06-10T16:33:30.655139740Z" level=warning msg="cleanup warnings time=\"2023-06-10T16:33:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4624 runtime=io.containerd.runc.v2\n"
	Jun 10 16:33:30 ingress-addon-legacy-879929 containerd[825]: time="2023-06-10T16:33:30.657904120Z" level=info msg="StopContainer for \"e16584f95adf6f19580783b34c26424b01d7f086353e0d7de4dc0bbbc80033d6\" returns successfully"
	Jun 10 16:33:30 ingress-addon-legacy-879929 containerd[825]: time="2023-06-10T16:33:30.658036278Z" level=info msg="StopContainer for \"e16584f95adf6f19580783b34c26424b01d7f086353e0d7de4dc0bbbc80033d6\" returns successfully"
	Jun 10 16:33:30 ingress-addon-legacy-879929 containerd[825]: time="2023-06-10T16:33:30.658704005Z" level=info msg="StopPodSandbox for \"cd9b1a8eb3b285c686d6baae30426bb90a13fb38d94c5aa66d7757b54ee0382d\""
	Jun 10 16:33:30 ingress-addon-legacy-879929 containerd[825]: time="2023-06-10T16:33:30.658780714Z" level=info msg="Container to stop \"e16584f95adf6f19580783b34c26424b01d7f086353e0d7de4dc0bbbc80033d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jun 10 16:33:30 ingress-addon-legacy-879929 containerd[825]: time="2023-06-10T16:33:30.661166579Z" level=info msg="StopPodSandbox for \"cd9b1a8eb3b285c686d6baae30426bb90a13fb38d94c5aa66d7757b54ee0382d\""
	Jun 10 16:33:30 ingress-addon-legacy-879929 containerd[825]: time="2023-06-10T16:33:30.661350520Z" level=info msg="Container to stop \"e16584f95adf6f19580783b34c26424b01d7f086353e0d7de4dc0bbbc80033d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jun 10 16:33:30 ingress-addon-legacy-879929 containerd[825]: time="2023-06-10T16:33:30.700286333Z" level=info msg="shim disconnected" id=cd9b1a8eb3b285c686d6baae30426bb90a13fb38d94c5aa66d7757b54ee0382d
	Jun 10 16:33:30 ingress-addon-legacy-879929 containerd[825]: time="2023-06-10T16:33:30.700358915Z" level=warning msg="cleaning up after shim disconnected" id=cd9b1a8eb3b285c686d6baae30426bb90a13fb38d94c5aa66d7757b54ee0382d namespace=k8s.io
	Jun 10 16:33:30 ingress-addon-legacy-879929 containerd[825]: time="2023-06-10T16:33:30.700369770Z" level=info msg="cleaning up dead shim"
	Jun 10 16:33:30 ingress-addon-legacy-879929 containerd[825]: time="2023-06-10T16:33:30.712659361Z" level=warning msg="cleanup warnings time=\"2023-06-10T16:33:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4661 runtime=io.containerd.runc.v2\ntime=\"2023-06-10T16:33:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n"
	Jun 10 16:33:30 ingress-addon-legacy-879929 containerd[825]: time="2023-06-10T16:33:30.762124280Z" level=info msg="TearDown network for sandbox \"cd9b1a8eb3b285c686d6baae30426bb90a13fb38d94c5aa66d7757b54ee0382d\" successfully"
	Jun 10 16:33:30 ingress-addon-legacy-879929 containerd[825]: time="2023-06-10T16:33:30.762318806Z" level=info msg="StopPodSandbox for \"cd9b1a8eb3b285c686d6baae30426bb90a13fb38d94c5aa66d7757b54ee0382d\" returns successfully"
	Jun 10 16:33:30 ingress-addon-legacy-879929 containerd[825]: time="2023-06-10T16:33:30.783187671Z" level=info msg="TearDown network for sandbox \"cd9b1a8eb3b285c686d6baae30426bb90a13fb38d94c5aa66d7757b54ee0382d\" successfully"
	Jun 10 16:33:30 ingress-addon-legacy-879929 containerd[825]: time="2023-06-10T16:33:30.783239700Z" level=info msg="StopPodSandbox for \"cd9b1a8eb3b285c686d6baae30426bb90a13fb38d94c5aa66d7757b54ee0382d\" returns successfully"
	Jun 10 16:33:31 ingress-addon-legacy-879929 containerd[825]: time="2023-06-10T16:33:31.817739661Z" level=info msg="StopContainer for \"e16584f95adf6f19580783b34c26424b01d7f086353e0d7de4dc0bbbc80033d6\" with timeout 2 (s)"
	Jun 10 16:33:31 ingress-addon-legacy-879929 containerd[825]: time="2023-06-10T16:33:31.817806877Z" level=info msg="Container to stop \"e16584f95adf6f19580783b34c26424b01d7f086353e0d7de4dc0bbbc80033d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jun 10 16:33:31 ingress-addon-legacy-879929 containerd[825]: time="2023-06-10T16:33:31.817888903Z" level=info msg="StopContainer for \"e16584f95adf6f19580783b34c26424b01d7f086353e0d7de4dc0bbbc80033d6\" returns successfully"
	Jun 10 16:33:31 ingress-addon-legacy-879929 containerd[825]: time="2023-06-10T16:33:31.818638861Z" level=info msg="StopPodSandbox for \"cd9b1a8eb3b285c686d6baae30426bb90a13fb38d94c5aa66d7757b54ee0382d\""
	Jun 10 16:33:31 ingress-addon-legacy-879929 containerd[825]: time="2023-06-10T16:33:31.818711434Z" level=info msg="Container to stop \"e16584f95adf6f19580783b34c26424b01d7f086353e0d7de4dc0bbbc80033d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jun 10 16:33:31 ingress-addon-legacy-879929 containerd[825]: time="2023-06-10T16:33:31.849091915Z" level=info msg="TearDown network for sandbox \"cd9b1a8eb3b285c686d6baae30426bb90a13fb38d94c5aa66d7757b54ee0382d\" successfully"
	Jun 10 16:33:31 ingress-addon-legacy-879929 containerd[825]: time="2023-06-10T16:33:31.849141670Z" level=info msg="StopPodSandbox for \"cd9b1a8eb3b285c686d6baae30426bb90a13fb38d94c5aa66d7757b54ee0382d\" returns successfully"
	
	* 
	* ==> coredns [afe1bf9c7078fb00ac817738c3e31f5c30d14f0ff19bece374255016d488edea] <==
	* [INFO] 10.244.0.5:58087 - 473 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000051265s
	[INFO] 10.244.0.5:58087 - 11508 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000069218s
	[INFO] 10.244.0.5:37686 - 32905 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002214444s
	[INFO] 10.244.0.5:58087 - 25646 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.0010223s
	[INFO] 10.244.0.5:37686 - 51702 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000124225s
	[INFO] 10.244.0.5:58087 - 62945 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001235796s
	[INFO] 10.244.0.5:58087 - 59064 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000068028s
	[INFO] 10.244.0.5:33939 - 19839 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000109734s
	[INFO] 10.244.0.5:56934 - 19645 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000068061s
	[INFO] 10.244.0.5:56934 - 33708 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000053005s
	[INFO] 10.244.0.5:33939 - 22100 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00004297s
	[INFO] 10.244.0.5:56934 - 56587 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000047532s
	[INFO] 10.244.0.5:33939 - 51565 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000047483s
	[INFO] 10.244.0.5:56934 - 31280 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000157513s
	[INFO] 10.244.0.5:56934 - 9669 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000041772s
	[INFO] 10.244.0.5:33939 - 62609 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000079261s
	[INFO] 10.244.0.5:56934 - 43938 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000042592s
	[INFO] 10.244.0.5:33939 - 28040 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000077948s
	[INFO] 10.244.0.5:33939 - 53527 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000043618s
	[INFO] 10.244.0.5:56934 - 41402 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001226762s
	[INFO] 10.244.0.5:33939 - 43973 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000838606s
	[INFO] 10.244.0.5:56934 - 56731 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001347163s
	[INFO] 10.244.0.5:33939 - 8406 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001217974s
	[INFO] 10.244.0.5:56934 - 11837 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000067109s
	[INFO] 10.244.0.5:33939 - 1143 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000048254s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-879929
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-879929
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eafc8e84d7336f18f4fb303d71d15fbd84fd16d5
	                    minikube.k8s.io/name=ingress-addon-legacy-879929
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_10T16_31_54_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jun 2023 16:31:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-879929
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jun 2023 16:33:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jun 2023 16:33:27 +0000   Sat, 10 Jun 2023 16:31:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jun 2023 16:33:27 +0000   Sat, 10 Jun 2023 16:31:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jun 2023 16:33:27 +0000   Sat, 10 Jun 2023 16:31:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jun 2023 16:33:27 +0000   Sat, 10 Jun 2023 16:32:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-879929
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	System Info:
	  Machine ID:                 096c90c8246144aead43320d1c0ca9b0
	  System UUID:                3c0c808f-fcf7-4874-a258-d2398b3cdbe9
	  Boot ID:                    9a54dfd9-cc23-412f-8f4a-0089a0162bc0
	  Kernel Version:             5.15.0-1037-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.21
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-mzw56                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 coredns-66bff467f8-njqvp                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     87s
	  kube-system                 etcd-ingress-addon-legacy-879929                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kindnet-wh9rr                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      87s
	  kube-system                 kube-apiserver-ingress-addon-legacy-879929             250m (12%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-879929    200m (10%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-k29x9                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-scheduler-ingress-addon-legacy-879929             100m (5%)     0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 113s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  113s (x3 over 113s)  kubelet     Node ingress-addon-legacy-879929 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s (x3 over 113s)  kubelet     Node ingress-addon-legacy-879929 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s (x3 over 113s)  kubelet     Node ingress-addon-legacy-879929 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  113s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 99s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  99s                  kubelet     Node ingress-addon-legacy-879929 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s                  kubelet     Node ingress-addon-legacy-879929 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s                  kubelet     Node ingress-addon-legacy-879929 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  99s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                89s                  kubelet     Node ingress-addon-legacy-879929 status is now: NodeReady
	  Normal  Starting                 86s                  kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000772] FS-Cache: N-cookie c=0000000c [p=00000003 fl=2 nc=0 na=1]
	[  +0.000993] FS-Cache: N-cookie d=00000000195d4e74{9p.inode} n=000000002c762050
	[  +0.001100] FS-Cache: N-key=[8] '8e385c0100000000'
	[  +0.003214] FS-Cache: Duplicate cookie detected
	[  +0.000732] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001024] FS-Cache: O-cookie d=00000000195d4e74{9p.inode} n=000000001b669a8d
	[  +0.001150] FS-Cache: O-key=[8] '8e385c0100000000'
	[  +0.000754] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000995] FS-Cache: N-cookie d=00000000195d4e74{9p.inode} n=0000000078285458
	[  +0.001126] FS-Cache: N-key=[8] '8e385c0100000000'
	[  +2.686175] FS-Cache: Duplicate cookie detected
	[  +0.000748] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001016] FS-Cache: O-cookie d=00000000195d4e74{9p.inode} n=0000000071a6ae55
	[  +0.001153] FS-Cache: O-key=[8] '8d385c0100000000'
	[  +0.000756] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.000990] FS-Cache: N-cookie d=00000000195d4e74{9p.inode} n=0000000041f4d043
	[  +0.001110] FS-Cache: N-key=[8] '8d385c0100000000'
	[  +0.295573] FS-Cache: Duplicate cookie detected
	[  +0.000783] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001070] FS-Cache: O-cookie d=00000000195d4e74{9p.inode} n=000000006d5382b1
	[  +0.001133] FS-Cache: O-key=[8] '95385c0100000000'
	[  +0.000779] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.001027] FS-Cache: N-cookie d=00000000195d4e74{9p.inode} n=000000001cb4080b
	[  +0.001146] FS-Cache: N-key=[8] '95385c0100000000'
	[Jun10 16:31] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	* 
	* ==> etcd [51a9963da8ba1c79d01ac723e403845d9ca082b84abae232023b8db3ed570370] <==
	* raft2023/06/10 16:31:46 INFO: aec36adc501070cc became follower at term 0
	raft2023/06/10 16:31:46 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/06/10 16:31:46 INFO: aec36adc501070cc became follower at term 1
	raft2023/06/10 16:31:46 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-06-10 16:31:46.146264 W | auth: simple token is not cryptographically signed
	2023-06-10 16:31:46.230464 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-06-10 16:31:46.486579 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-06-10 16:31:46.626827 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-06-10 16:31:46.654363 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/06/10 16:31:46 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-06-10 16:31:46.655334 I | embed: listening for peers on 192.168.49.2:2380
	2023-06-10 16:31:46.655726 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2023/06/10 16:31:47 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/06/10 16:31:47 INFO: aec36adc501070cc became candidate at term 2
	raft2023/06/10 16:31:47 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/06/10 16:31:47 INFO: aec36adc501070cc became leader at term 2
	raft2023/06/10 16:31:47 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-06-10 16:31:47.206105 I | etcdserver: setting up the initial cluster version to 3.4
	2023-06-10 16:31:47.206383 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-06-10 16:31:47.206467 I | etcdserver/api: enabled capabilities for version 3.4
	2023-06-10 16:31:47.206499 I | etcdserver: published {Name:ingress-addon-legacy-879929 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-06-10 16:31:47.206602 I | embed: ready to serve client requests
	2023-06-10 16:31:47.207976 I | embed: serving client requests on 127.0.0.1:2379
	2023-06-10 16:31:47.208080 I | embed: ready to serve client requests
	2023-06-10 16:31:47.209136 I | embed: serving client requests on 192.168.49.2:2379
	
	* 
	* ==> kernel <==
	*  16:33:36 up 16 min,  0 users,  load average: 1.35, 1.46, 0.90
	Linux ingress-addon-legacy-879929 5.15.0-1037-aws #41~20.04.1-Ubuntu SMP Mon May 22 18:20:20 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [13a91a90d3da3118ebada6bc4ce0e706ea5c70967ca596eb80252ab55404e707] <==
	* I0610 16:32:12.013468       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0610 16:32:12.013548       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0610 16:32:12.013825       1 main.go:116] setting mtu 1500 for CNI 
	I0610 16:32:12.013851       1 main.go:146] kindnetd IP family: "ipv4"
	I0610 16:32:12.013933       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0610 16:32:12.415033       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 16:32:12.415070       1 main.go:227] handling current node
	I0610 16:32:22.518117       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 16:32:22.518146       1 main.go:227] handling current node
	I0610 16:32:32.521535       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 16:32:32.521565       1 main.go:227] handling current node
	I0610 16:32:42.533623       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 16:32:42.533652       1 main.go:227] handling current node
	I0610 16:32:52.545676       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 16:32:52.545704       1 main.go:227] handling current node
	I0610 16:33:02.557123       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 16:33:02.557156       1 main.go:227] handling current node
	I0610 16:33:12.560434       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 16:33:12.560463       1 main.go:227] handling current node
	I0610 16:33:22.565647       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 16:33:22.565679       1 main.go:227] handling current node
	I0610 16:33:32.569030       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 16:33:32.569060       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [4a8f603bae13b63a89229fd70512b6c751eb7cd1b486c0eac6aa2728c172bb50] <==
	* I0610 16:31:51.185076       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	E0610 16:31:51.219241       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0610 16:31:51.337925       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0610 16:31:51.337977       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0610 16:31:51.337997       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0610 16:31:51.365342       1 cache.go:39] Caches are synced for autoregister controller
	I0610 16:31:51.374552       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0610 16:31:52.163943       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0610 16:31:52.163971       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0610 16:31:52.177195       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0610 16:31:52.181230       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0610 16:31:52.181256       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0610 16:31:52.587848       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0610 16:31:52.637748       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0610 16:31:52.746949       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0610 16:31:52.748180       1 controller.go:609] quota admission added evaluator for: endpoints
	I0610 16:31:52.752358       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0610 16:31:53.605235       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0610 16:31:54.240429       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0610 16:31:54.360636       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0610 16:31:57.691059       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0610 16:32:09.388294       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0610 16:32:09.445114       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0610 16:32:30.053059       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0610 16:32:56.408196       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [52085491d2691f1c71fe93d48be6b5afa80394ffe0d8292e5f0053a3f1a29ddd] <==
	* I0610 16:32:09.731387       1 shared_informer.go:230] Caches are synced for service account 
	I0610 16:32:09.733731       1 shared_informer.go:230] Caches are synced for namespace 
	I0610 16:32:09.787084       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I0610 16:32:09.833701       1 shared_informer.go:230] Caches are synced for resource quota 
	I0610 16:32:09.842946       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0610 16:32:09.862505       1 shared_informer.go:230] Caches are synced for expand 
	I0610 16:32:09.881284       1 shared_informer.go:230] Caches are synced for resource quota 
	I0610 16:32:09.881318       1 shared_informer.go:230] Caches are synced for PV protection 
	I0610 16:32:09.881619       1 shared_informer.go:230] Caches are synced for stateful set 
	I0610 16:32:09.885589       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0610 16:32:09.931896       1 shared_informer.go:230] Caches are synced for attach detach 
	I0610 16:32:09.932847       1 shared_informer.go:230] Caches are synced for PVC protection 
	I0610 16:32:09.936022       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0610 16:32:09.936056       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0610 16:32:10.347883       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"9b0bb44f-8bdc-44e5-8525-afbf3cc2a5ab", APIVersion:"apps/v1", ResourceVersion:"380", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0610 16:32:10.394844       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"caa3d0cc-c456-4413-830f-bd648560a3ac", APIVersion:"apps/v1", ResourceVersion:"381", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-z84wh
	I0610 16:32:30.020739       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"69565387-d2d9-41f3-a208-962d786e71c4", APIVersion:"apps/v1", ResourceVersion:"478", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0610 16:32:30.033075       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"69d06501-5a8e-42e0-b9de-41cb7c50960b", APIVersion:"apps/v1", ResourceVersion:"479", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-grkp5
	I0610 16:32:30.119456       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"33835f10-ffe5-480f-acfc-8b903e83936c", APIVersion:"batch/v1", ResourceVersion:"486", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-7mnrb
	I0610 16:32:30.257659       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"ace64f01-5623-4099-8c0d-4e2b36ba70d3", APIVersion:"batch/v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-kxnbd
	I0610 16:32:32.922661       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"ace64f01-5623-4099-8c0d-4e2b36ba70d3", APIVersion:"batch/v1", ResourceVersion:"503", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0610 16:32:32.946078       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"33835f10-ffe5-480f-acfc-8b903e83936c", APIVersion:"batch/v1", ResourceVersion:"495", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0610 16:33:06.136581       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"e4b3b070-15ab-4dfe-938f-91aba58a6c15", APIVersion:"apps/v1", ResourceVersion:"619", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0610 16:33:06.151344       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"19cf4db1-2c43-4488-9aee-ce41fe783676", APIVersion:"apps/v1", ResourceVersion:"620", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-mzw56
	E0610 16:33:33.327141       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-cqkzv" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [bd4e21395fc28a1110ba49078d3c8e549b756cdb320cfbcc37e506adf814b780] <==
	* W0610 16:32:10.199882       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0610 16:32:10.214032       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0610 16:32:10.214258       1 server_others.go:186] Using iptables Proxier.
	I0610 16:32:10.214847       1 server.go:583] Version: v1.18.20
	I0610 16:32:10.218542       1 config.go:133] Starting endpoints config controller
	I0610 16:32:10.218580       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0610 16:32:10.218620       1 config.go:315] Starting service config controller
	I0610 16:32:10.218623       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0610 16:32:10.318786       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0610 16:32:10.320256       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [5ca0dfc9d84379a8390c1bd8cadcf744aac22dde7b0fd29ce39b0afd0c11d2b7] <==
	* W0610 16:31:51.316499       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 16:31:51.316510       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0610 16:31:51.316518       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0610 16:31:51.365858       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0610 16:31:51.365943       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0610 16:31:51.367922       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0610 16:31:51.368663       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 16:31:51.368721       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 16:31:51.368761       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0610 16:31:51.374724       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0610 16:31:51.374847       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 16:31:51.378800       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0610 16:31:51.379230       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0610 16:31:51.380614       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0610 16:31:51.380823       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0610 16:31:51.381010       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 16:31:51.381119       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 16:31:51.381222       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0610 16:31:51.381330       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 16:31:51.381438       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0610 16:31:51.388878       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 16:31:52.315486       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 16:31:52.424843       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0610 16:31:52.456152       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0610 16:31:52.768855       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* Jun 10 16:33:10 ingress-addon-legacy-879929 kubelet[1642]: I0610 16:33:10.072680    1642 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: b0afceeeaee7a036ce6abaa5e1ece1bc818a3e03dc8dce591c282aa2061279a8
	Jun 10 16:33:10 ingress-addon-legacy-879929 kubelet[1642]: I0610 16:33:10.073116    1642 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 9b43713b09769b7bb2b2958b288753e96be23f76765f000bd07c2b99f04b2d6a
	Jun 10 16:33:10 ingress-addon-legacy-879929 kubelet[1642]: E0610 16:33:10.073368    1642 pod_workers.go:191] Error syncing pod ce78422d-5e14-463a-bdca-aa5b704f85b8 ("hello-world-app-5f5d8b66bb-mzw56_default(ce78422d-5e14-463a-bdca-aa5b704f85b8)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-mzw56_default(ce78422d-5e14-463a-bdca-aa5b704f85b8)"
	Jun 10 16:33:11 ingress-addon-legacy-879929 kubelet[1642]: I0610 16:33:11.076316    1642 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 9b43713b09769b7bb2b2958b288753e96be23f76765f000bd07c2b99f04b2d6a
	Jun 10 16:33:11 ingress-addon-legacy-879929 kubelet[1642]: E0610 16:33:11.076568    1642 pod_workers.go:191] Error syncing pod ce78422d-5e14-463a-bdca-aa5b704f85b8 ("hello-world-app-5f5d8b66bb-mzw56_default(ce78422d-5e14-463a-bdca-aa5b704f85b8)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-mzw56_default(ce78422d-5e14-463a-bdca-aa5b704f85b8)"
	Jun 10 16:33:14 ingress-addon-legacy-879929 kubelet[1642]: I0610 16:33:14.812263    1642 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 2e7e0e5fba988c520026505043b9b90b5d0345d9e0205fc6fbd966c532d0193a
	Jun 10 16:33:14 ingress-addon-legacy-879929 kubelet[1642]: E0610 16:33:14.813054    1642 pod_workers.go:191] Error syncing pod 69f57495-fcf9-4850-9858-b82ef8da9370 ("kube-ingress-dns-minikube_kube-system(69f57495-fcf9-4850-9858-b82ef8da9370)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(69f57495-fcf9-4850-9858-b82ef8da9370)"
	Jun 10 16:33:21 ingress-addon-legacy-879929 kubelet[1642]: I0610 16:33:21.955870    1642 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-tzlb2" (UniqueName: "kubernetes.io/secret/69f57495-fcf9-4850-9858-b82ef8da9370-minikube-ingress-dns-token-tzlb2") pod "69f57495-fcf9-4850-9858-b82ef8da9370" (UID: "69f57495-fcf9-4850-9858-b82ef8da9370")
	Jun 10 16:33:21 ingress-addon-legacy-879929 kubelet[1642]: I0610 16:33:21.962392    1642 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69f57495-fcf9-4850-9858-b82ef8da9370-minikube-ingress-dns-token-tzlb2" (OuterVolumeSpecName: "minikube-ingress-dns-token-tzlb2") pod "69f57495-fcf9-4850-9858-b82ef8da9370" (UID: "69f57495-fcf9-4850-9858-b82ef8da9370"). InnerVolumeSpecName "minikube-ingress-dns-token-tzlb2". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 10 16:33:22 ingress-addon-legacy-879929 kubelet[1642]: I0610 16:33:22.056232    1642 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-tzlb2" (UniqueName: "kubernetes.io/secret/69f57495-fcf9-4850-9858-b82ef8da9370-minikube-ingress-dns-token-tzlb2") on node "ingress-addon-legacy-879929" DevicePath ""
	Jun 10 16:33:24 ingress-addon-legacy-879929 kubelet[1642]: I0610 16:33:24.102249    1642 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 2e7e0e5fba988c520026505043b9b90b5d0345d9e0205fc6fbd966c532d0193a
	Jun 10 16:33:24 ingress-addon-legacy-879929 kubelet[1642]: I0610 16:33:24.812240    1642 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 9b43713b09769b7bb2b2958b288753e96be23f76765f000bd07c2b99f04b2d6a
	Jun 10 16:33:25 ingress-addon-legacy-879929 kubelet[1642]: I0610 16:33:25.107403    1642 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 9b43713b09769b7bb2b2958b288753e96be23f76765f000bd07c2b99f04b2d6a
	Jun 10 16:33:25 ingress-addon-legacy-879929 kubelet[1642]: I0610 16:33:25.107737    1642 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: a93a865c7381c48b8c26ac035621c7a4261df2d82357f762fe9054af79efdc4c
	Jun 10 16:33:25 ingress-addon-legacy-879929 kubelet[1642]: E0610 16:33:25.107999    1642 pod_workers.go:191] Error syncing pod ce78422d-5e14-463a-bdca-aa5b704f85b8 ("hello-world-app-5f5d8b66bb-mzw56_default(ce78422d-5e14-463a-bdca-aa5b704f85b8)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-mzw56_default(ce78422d-5e14-463a-bdca-aa5b704f85b8)"
	Jun 10 16:33:28 ingress-addon-legacy-879929 kubelet[1642]: E0610 16:33:28.552531    1642 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-grkp5.1767595557271816", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-grkp5", UID:"1a161b4f-8d34-411d-a815-33b3aafbe06f", APIVersion:"v1", ResourceVersion:"483", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-879929"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1194756205aa816, ext:94356838443, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1194756205aa816, ext:94356838443, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-grkp5.1767595557271816" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jun 10 16:33:28 ingress-addon-legacy-879929 kubelet[1642]: E0610 16:33:28.569529    1642 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-grkp5.1767595557271816", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-grkp5", UID:"1a161b4f-8d34-411d-a815-33b3aafbe06f", APIVersion:"v1", ResourceVersion:"483", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-879929"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1194756205aa816, ext:94356838443, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc119475620ebd9ee, ext:94366353914, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-grkp5.1767595557271816" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jun 10 16:33:31 ingress-addon-legacy-879929 kubelet[1642]: W0610 16:33:31.121359    1642 pod_container_deletor.go:77] Container "cd9b1a8eb3b285c686d6baae30426bb90a13fb38d94c5aa66d7757b54ee0382d" not found in pod's containers
	Jun 10 16:33:32 ingress-addon-legacy-879929 kubelet[1642]: I0610 16:33:32.682315    1642 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/1a161b4f-8d34-411d-a815-33b3aafbe06f-webhook-cert") pod "1a161b4f-8d34-411d-a815-33b3aafbe06f" (UID: "1a161b4f-8d34-411d-a815-33b3aafbe06f")
	Jun 10 16:33:32 ingress-addon-legacy-879929 kubelet[1642]: I0610 16:33:32.684098    1642 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-dncl5" (UniqueName: "kubernetes.io/secret/1a161b4f-8d34-411d-a815-33b3aafbe06f-ingress-nginx-token-dncl5") pod "1a161b4f-8d34-411d-a815-33b3aafbe06f" (UID: "1a161b4f-8d34-411d-a815-33b3aafbe06f")
	Jun 10 16:33:32 ingress-addon-legacy-879929 kubelet[1642]: I0610 16:33:32.691639    1642 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a161b4f-8d34-411d-a815-33b3aafbe06f-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "1a161b4f-8d34-411d-a815-33b3aafbe06f" (UID: "1a161b4f-8d34-411d-a815-33b3aafbe06f"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 10 16:33:32 ingress-addon-legacy-879929 kubelet[1642]: I0610 16:33:32.691797    1642 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a161b4f-8d34-411d-a815-33b3aafbe06f-ingress-nginx-token-dncl5" (OuterVolumeSpecName: "ingress-nginx-token-dncl5") pod "1a161b4f-8d34-411d-a815-33b3aafbe06f" (UID: "1a161b4f-8d34-411d-a815-33b3aafbe06f"). InnerVolumeSpecName "ingress-nginx-token-dncl5". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 10 16:33:32 ingress-addon-legacy-879929 kubelet[1642]: I0610 16:33:32.784477    1642 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/1a161b4f-8d34-411d-a815-33b3aafbe06f-webhook-cert") on node "ingress-addon-legacy-879929" DevicePath ""
	Jun 10 16:33:32 ingress-addon-legacy-879929 kubelet[1642]: I0610 16:33:32.784520    1642 reconciler.go:319] Volume detached for volume "ingress-nginx-token-dncl5" (UniqueName: "kubernetes.io/secret/1a161b4f-8d34-411d-a815-33b3aafbe06f-ingress-nginx-token-dncl5") on node "ingress-addon-legacy-879929" DevicePath ""
	Jun 10 16:33:33 ingress-addon-legacy-879929 kubelet[1642]: W0610 16:33:33.819005    1642 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/1a161b4f-8d34-411d-a815-33b3aafbe06f/volumes" does not exist
	
	* 
	* ==> storage-provisioner [88ee39b699d52f86d7ecc5f288e2382e740b63970ff584dcb246ca7f0495f238] <==
	* I0610 16:32:13.220712       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0610 16:32:13.232871       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0610 16:32:13.232955       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0610 16:32:13.240193       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0610 16:32:13.241203       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-879929_abf91674-2132-47bb-94cc-2ce27ab86362!
	I0610 16:32:13.242194       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8c615232-681e-45e3-a2c1-21a191fc0358", APIVersion:"v1", ResourceVersion:"416", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-879929_abf91674-2132-47bb-94cc-2ce27ab86362 became leader
	I0610 16:32:13.342114       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-879929_abf91674-2132-47bb-94cc-2ce27ab86362!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-879929 -n ingress-addon-legacy-879929
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-879929 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (58.79s)

                                                
                                    
x
+
TestMissingContainerUpgrade (82.72s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.9.1.2363493211.exe start -p missing-upgrade-715299 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:321: (dbg) Non-zero exit: /tmp/minikube-v1.9.1.2363493211.exe start -p missing-upgrade-715299 --memory=2200 --driver=docker  --container-runtime=containerd: exit status 70 (1m2.058095771s)

                                                
                                                
-- stdout --
	! [missing-upgrade-715299] minikube v1.9.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16578-2220/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16578-2220/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-715299
	* Pulling base image ...
	* Creating Kubernetes in docker container with (CPUs=2) (2 available), Memory=2200MB (7834MB available) ...
	* Deleting "missing-upgrade-715299" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (2 available), Memory=2200MB (7834MB available) ...

                                                
                                                
-- /stdout --
** stderr ** 
	* minikube 1.30.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.30.1
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	! StartHost failed, but will try again: creating host: create: creating: create kic node: check container "missing-upgrade-715299" running: temporary error created container "missing-upgrade-715299" is not running yet
	* 
	X Failed to start docker container. "minikube start -p missing-upgrade-715299" may fix it.: creating host: create: creating: create kic node: check container "missing-upgrade-715299" running: temporary error created container "missing-upgrade-715299" is not running yet
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.9.1.2363493211.exe start -p missing-upgrade-715299 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:321: (dbg) Non-zero exit: /tmp/minikube-v1.9.1.2363493211.exe start -p missing-upgrade-715299 --memory=2200 --driver=docker  --container-runtime=containerd: exit status 70 (6.641705297s)

                                                
                                                
-- stdout --
	* [missing-upgrade-715299] minikube v1.9.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16578-2220/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16578-2220/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-715299
	* Pulling base image ...
	* Restarting existing docker container for "missing-upgrade-715299" ...
	* Restarting existing docker container for "missing-upgrade-715299" ...

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: get ssh host-port: get host-bind port 22 for "missing-upgrade-715299", output 
	template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
	: exit status 1
	* 
	X Failed to start docker container. "minikube start -p missing-upgrade-715299" may fix it.: provision: get ssh host-port: get host-bind port 22 for "missing-upgrade-715299", output 
	template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.9.1.2363493211.exe start -p missing-upgrade-715299 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:321: (dbg) Non-zero exit: /tmp/minikube-v1.9.1.2363493211.exe start -p missing-upgrade-715299 --memory=2200 --driver=docker  --container-runtime=containerd: exit status 70 (6.460714389s)

                                                
                                                
-- stdout --
	* [missing-upgrade-715299] minikube v1.9.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16578-2220/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16578-2220/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-715299
	* Pulling base image ...
	* Restarting existing docker container for "missing-upgrade-715299" ...
	* Restarting existing docker container for "missing-upgrade-715299" ...

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: get ssh host-port: get host-bind port 22 for "missing-upgrade-715299", output 
	template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
	: exit status 1
	* 
	X Failed to start docker container. "minikube start -p missing-upgrade-715299" may fix it.: provision: get ssh host-port: get host-bind port 22 for "missing-upgrade-715299", output 
	template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:327: release start failed: exit status 70
panic.go:522: *** TestMissingContainerUpgrade FAILED at 2023-06-10 16:54:39.377007127 +0000 UTC m=+1968.764508021
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-715299
helpers_test.go:235: (dbg) docker inspect missing-upgrade-715299:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "683f5ebfae5b2fe9572b32269a12098ea1ba4bda15bbdf7c30a484985f52e56b",
	        "Created": "2023-06-10T16:54:08.559836939Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 1,
	            "Error": "",
	            "StartedAt": "2023-06-10T16:54:39.1344786Z",
	            "FinishedAt": "2023-06-10T16:54:39.133669467Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/683f5ebfae5b2fe9572b32269a12098ea1ba4bda15bbdf7c30a484985f52e56b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/683f5ebfae5b2fe9572b32269a12098ea1ba4bda15bbdf7c30a484985f52e56b/hostname",
	        "HostsPath": "/var/lib/docker/containers/683f5ebfae5b2fe9572b32269a12098ea1ba4bda15bbdf7c30a484985f52e56b/hosts",
	        "LogPath": "/var/lib/docker/containers/683f5ebfae5b2fe9572b32269a12098ea1ba4bda15bbdf7c30a484985f52e56b/683f5ebfae5b2fe9572b32269a12098ea1ba4bda15bbdf7c30a484985f52e56b-json.log",
	        "Name": "/missing-upgrade-715299",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-715299:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0ec2b6e163d5d94d52e00edd71644a417fa7d5831254f29495a0e184461cfee5-init/diff:/var/lib/docker/overlay2/0767ea555e90b543ae8a5599a301d6b8f51183fbb49645f580ba5e5f81acb19d/diff:/var/lib/docker/overlay2/c8c4726cae71f29cf5e0067a61d87431d25423897651991ee369d42188d5bd27/diff:/var/lib/docker/overlay2/6eda24656eb8facc2b7197c90a8d457211dcd2d8305e79bbc8844bf80c7524d8/diff:/var/lib/docker/overlay2/c3d9dfae182530e6c0d577e6cd1dc5e8a865850f75eb815edf1b1663141aa883/diff:/var/lib/docker/overlay2/7e4fbda00600f0b26b718d19754c49f833cbc0e8347c27f40204dc24f1b1dcfb/diff:/var/lib/docker/overlay2/9fa0a44fa35601980c31b5eec0b72292d61cd10cb6dc55834c9fc7fc8bf79de5/diff:/var/lib/docker/overlay2/50b8f465e2cbcb55b515d418e7c733c9bc9a1a0cd04bc7906ffd203bdb67565c/diff:/var/lib/docker/overlay2/7ffa1eb19c84e9865ee27e94d559e180acb11709f1f43dd528316f19e242fd87/diff:/var/lib/docker/overlay2/d8c3d66cca3ac67b4faf2383433e2b4f1886024d431503559c2b1410a626a8fc/diff:/var/lib/docker/overlay2/50f52e
a511d793065470112a0d7793f54bbbe0ed1a3fe9cfab9b47914fe17f50/diff:/var/lib/docker/overlay2/585818ba8652831845c2d75f9f5a094607c804553fa073a1cc165f5c9237acc8/diff:/var/lib/docker/overlay2/bf1785f3b0778b62ca589a8dff481d939284bec57f20a6f88d1c328935b9cbc4/diff:/var/lib/docker/overlay2/4e1810c2d4ad445656202a3e8cff4062608bebef6f67ed7d99c463dc8d7fef49/diff:/var/lib/docker/overlay2/b3b77e91e7c750caeae9977b9e44a5268fcfa5b64815e9d197c26b83ff2e460f/diff:/var/lib/docker/overlay2/a78d2518c5d907724587d24b6734af6b545aa38ced697ba1c51bc5cccddf296b/diff:/var/lib/docker/overlay2/694cf1dafa083fdf24711dd636267e4070f8cf1e7ca3dd5e1b8971f3c9fde04b/diff:/var/lib/docker/overlay2/53440c0ae51564b5f29ed524a039ebfdf475983bb9472617686de4659939c4dd/diff:/var/lib/docker/overlay2/8fe0f5f5d2d3244172342ce84109902bb0f02d688761964107fd9ea3c3c48c66/diff:/var/lib/docker/overlay2/d0c8bcbc4bd7a3591c304512a9f23cfe2b29d31754a87bff229e680a83152681/diff:/var/lib/docker/overlay2/0c94597f8da19553ba70d4a75994727479ee624cb7c12efa67bd32a874b509b9/diff:/var/lib/d
ocker/overlay2/95d4d652be2fd91253f41bd92e753beff67793443a3307bf1bd98d928ca52f14/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0ec2b6e163d5d94d52e00edd71644a417fa7d5831254f29495a0e184461cfee5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0ec2b6e163d5d94d52e00edd71644a417fa7d5831254f29495a0e184461cfee5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0ec2b6e163d5d94d52e00edd71644a417fa7d5831254f29495a0e184461cfee5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-715299",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-715299/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-715299",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-715299",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-715299",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6bb8f8810e040f6d8c45976e34f4b4a4c1de0303d2b29ba7bbba9b24d4c65773",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/6bb8f8810e04",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "04b83432d5598c47e43ec3e90a6c6d4391843845fa4cda01a94bd4f069afa023",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-715299 -n missing-upgrade-715299
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-715299 -n missing-upgrade-715299: exit status 7 (68.351041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "missing-upgrade-715299" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "missing-upgrade-715299" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-715299
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-715299: (4.229496028s)
--- FAIL: TestMissingContainerUpgrade (82.72s)
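For local triage of a dump like the one above, individual fields can be pulled with docker inspect's format flag instead of reading the full JSON. A minimal sketch, assuming the profile container has not yet been removed by the profile cleanup step above:

# Illustrative only: query selected HostConfig fields from the container inspected above.
docker inspect missing-upgrade-715299 \
  --format 'privileged={{.HostConfig.Privileged}} memory={{.HostConfig.Memory}} nanocpus={{.HostConfig.NanoCpus}}'
# Confirm the container state before expecting "minikube status" to report Running.
docker inspect missing-upgrade-715299 --format '{{.State.Status}}'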

                                                
                                    

Test pass (265/302)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 11.45
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.41
10 TestDownloadOnly/v1.27.2/json-events 10.21
11 TestDownloadOnly/v1.27.2/preload-exists 0
15 TestDownloadOnly/v1.27.2/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.23
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.15
19 TestBinaryMirror 0.58
22 TestAddons/Setup 143.14
24 TestAddons/parallel/Registry 15.74
26 TestAddons/parallel/InspektorGadget 10.68
27 TestAddons/parallel/MetricsServer 5.71
30 TestAddons/parallel/CSI 48.98
31 TestAddons/parallel/Headlamp 13.63
32 TestAddons/parallel/CloudSpanner 5.42
35 TestAddons/serial/GCPAuth/Namespaces 0.18
36 TestAddons/StoppedEnableDisable 12.37
37 TestCertOptions 42.15
38 TestCertExpiration 249.75
40 TestForceSystemdFlag 48.33
41 TestForceSystemdEnv 55.47
46 TestErrorSpam/setup 33.18
47 TestErrorSpam/start 0.84
48 TestErrorSpam/status 1.16
49 TestErrorSpam/pause 1.87
50 TestErrorSpam/unpause 1.93
51 TestErrorSpam/stop 1.46
54 TestFunctional/serial/CopySyncFile 0
55 TestFunctional/serial/StartWithProxy 85.46
56 TestFunctional/serial/AuditLog 0
57 TestFunctional/serial/SoftStart 18.44
58 TestFunctional/serial/KubeContext 0.06
59 TestFunctional/serial/KubectlGetPods 0.11
62 TestFunctional/serial/CacheCmd/cache/add_remote 4.27
63 TestFunctional/serial/CacheCmd/cache/add_local 1.34
64 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
65 TestFunctional/serial/CacheCmd/cache/list 0.07
66 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.42
67 TestFunctional/serial/CacheCmd/cache/cache_reload 2.47
68 TestFunctional/serial/CacheCmd/cache/delete 0.11
69 TestFunctional/serial/MinikubeKubectlCmd 0.14
70 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
71 TestFunctional/serial/ExtraConfig 55.48
72 TestFunctional/serial/ComponentHealth 0.11
73 TestFunctional/serial/LogsCmd 2.06
76 TestFunctional/parallel/ConfigCmd 0.47
77 TestFunctional/parallel/DashboardCmd 8.78
78 TestFunctional/parallel/DryRun 0.76
79 TestFunctional/parallel/InternationalLanguage 0.3
80 TestFunctional/parallel/StatusCmd 1.21
84 TestFunctional/parallel/ServiceCmdConnect 8.81
85 TestFunctional/parallel/AddonsCmd 0.17
86 TestFunctional/parallel/PersistentVolumeClaim 26.09
88 TestFunctional/parallel/SSHCmd 0.74
89 TestFunctional/parallel/CpCmd 1.48
91 TestFunctional/parallel/FileSync 0.4
92 TestFunctional/parallel/CertSync 2.24
96 TestFunctional/parallel/NodeLabels 0.09
98 TestFunctional/parallel/NonActiveRuntimeDisabled 0.81
100 TestFunctional/parallel/License 0.34
101 TestFunctional/parallel/Version/short 0.06
102 TestFunctional/parallel/Version/components 0.83
103 TestFunctional/parallel/ImageCommands/ImageListShort 0.32
104 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
105 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
106 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
107 TestFunctional/parallel/ImageCommands/ImageBuild 3.51
108 TestFunctional/parallel/ImageCommands/Setup 1.93
109 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
110 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
111 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
113 TestFunctional/parallel/ServiceCmd/DeployApp 9.31
116 TestFunctional/parallel/ServiceCmd/List 0.51
117 TestFunctional/parallel/ServiceCmd/JSONOutput 0.43
118 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
119 TestFunctional/parallel/ServiceCmd/Format 0.52
120 TestFunctional/parallel/ServiceCmd/URL 0.52
123 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.74
124 TestFunctional/parallel/ImageCommands/ImageRemove 0.7
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.65
129 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.62
130 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
131 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
135 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
136 TestFunctional/parallel/ProfileCmd/profile_not_create 0.49
137 TestFunctional/parallel/ProfileCmd/profile_list 0.55
138 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
139 TestFunctional/parallel/MountCmd/any-port 7.33
140 TestFunctional/parallel/MountCmd/specific-port 1.59
141 TestFunctional/parallel/MountCmd/VerifyCleanup 2.66
142 TestFunctional/delete_addon-resizer_images 0.1
143 TestFunctional/delete_my-image_image 0.02
144 TestFunctional/delete_minikube_cached_images 0.02
148 TestIngressAddonLegacy/StartLegacyK8sCluster 87.29
150 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 8.92
151 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.43
155 TestJSONOutput/start/Command 84.66
156 TestJSONOutput/start/Audit 0
158 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/pause/Command 0.81
162 TestJSONOutput/pause/Audit 0
164 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/unpause/Command 0.75
168 TestJSONOutput/unpause/Audit 0
170 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/stop/Command 5.81
174 TestJSONOutput/stop/Audit 0
176 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
178 TestErrorJSONOutput 0.22
180 TestKicCustomNetwork/create_custom_network 44.92
181 TestKicCustomNetwork/use_default_bridge_network 36.34
182 TestKicExistingNetwork 35.42
183 TestKicCustomSubnet 33.29
184 TestKicStaticIP 39.66
185 TestMainNoArgs 0.05
186 TestMinikubeProfile 72.64
189 TestMountStart/serial/StartWithMountFirst 9.61
190 TestMountStart/serial/VerifyMountFirst 0.29
191 TestMountStart/serial/StartWithMountSecond 8.96
192 TestMountStart/serial/VerifyMountSecond 0.3
193 TestMountStart/serial/DeleteFirst 1.67
194 TestMountStart/serial/VerifyMountPostDelete 0.29
195 TestMountStart/serial/Stop 1.23
196 TestMountStart/serial/RestartStopped 7.44
197 TestMountStart/serial/VerifyMountPostStop 0.29
200 TestMultiNode/serial/FreshStart2Nodes 86.14
201 TestMultiNode/serial/DeployApp2Nodes 4.09
202 TestMultiNode/serial/PingHostFrom2Pods 1.1
203 TestMultiNode/serial/AddNode 30.03
204 TestMultiNode/serial/ProfileList 0.36
205 TestMultiNode/serial/CopyFile 10.92
206 TestMultiNode/serial/StopNode 2.36
207 TestMultiNode/serial/StartAfterStop 11.84
208 TestMultiNode/serial/RestartKeepsNodes 146.5
209 TestMultiNode/serial/DeleteNode 5.12
210 TestMultiNode/serial/StopMultiNode 24.12
211 TestMultiNode/serial/RestartMultiNode 99.36
212 TestMultiNode/serial/ValidateNameConflict 47.08
217 TestPreload 184.04
219 TestScheduledStopUnix 118.76
222 TestInsufficientStorage 13.14
223 TestRunningBinaryUpgrade 111.64
225 TestKubernetesUpgrade 437.98
228 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
229 TestNoKubernetes/serial/StartWithK8s 37.26
230 TestNoKubernetes/serial/StartWithStopK8s 30.33
231 TestNoKubernetes/serial/Start 6.07
232 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
233 TestNoKubernetes/serial/ProfileList 0.67
234 TestNoKubernetes/serial/Stop 1.24
235 TestNoKubernetes/serial/StartNoArgs 6.53
236 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.34
237 TestStoppedBinaryUpgrade/Setup 1.37
238 TestStoppedBinaryUpgrade/Upgrade 172.86
239 TestStoppedBinaryUpgrade/MinikubeLogs 1.42
248 TestPause/serial/Start 88.54
249 TestPause/serial/SecondStartNoReconfiguration 18.79
250 TestPause/serial/Pause 0.87
251 TestPause/serial/VerifyStatus 0.37
252 TestPause/serial/Unpause 0.74
253 TestPause/serial/PauseAgain 0.94
254 TestPause/serial/DeletePaused 2.92
255 TestPause/serial/VerifyDeletedResources 0.59
263 TestNetworkPlugins/group/false 5.19
268 TestStartStop/group/old-k8s-version/serial/FirstStart 143.92
269 TestStartStop/group/old-k8s-version/serial/DeployApp 9.8
271 TestStartStop/group/no-preload/serial/FirstStart 72.38
272 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.66
273 TestStartStop/group/old-k8s-version/serial/Stop 13.92
274 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
275 TestStartStop/group/old-k8s-version/serial/SecondStart 681.06
276 TestStartStop/group/no-preload/serial/DeployApp 8.67
277 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.04
278 TestStartStop/group/no-preload/serial/Stop 12.12
279 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
280 TestStartStop/group/no-preload/serial/SecondStart 362.4
281 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
282 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
283 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.36
284 TestStartStop/group/no-preload/serial/Pause 3.36
286 TestStartStop/group/embed-certs/serial/FirstStart 59.89
287 TestStartStop/group/embed-certs/serial/DeployApp 9.49
288 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.02
289 TestStartStop/group/embed-certs/serial/Stop 12.19
290 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
291 TestStartStop/group/embed-certs/serial/SecondStart 351.13
292 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
293 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
294 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.36
295 TestStartStop/group/old-k8s-version/serial/Pause 3.49
297 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 65.9
298 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.53
299 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1
300 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.15
301 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.16
302 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 346.03
303 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 13.03
304 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
305 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.35
306 TestStartStop/group/embed-certs/serial/Pause 3.36
308 TestStartStop/group/newest-cni/serial/FirstStart 44.63
309 TestStartStop/group/newest-cni/serial/DeployApp 0
310 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.05
311 TestStartStop/group/newest-cni/serial/Stop 1.3
312 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.16
313 TestStartStop/group/newest-cni/serial/SecondStart 40.9
314 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
315 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
316 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.34
317 TestStartStop/group/newest-cni/serial/Pause 3.24
318 TestNetworkPlugins/group/auto/Start 87.78
319 TestNetworkPlugins/group/auto/KubeletFlags 0.4
320 TestNetworkPlugins/group/auto/NetCatPod 10.54
321 TestNetworkPlugins/group/auto/DNS 0.37
322 TestNetworkPlugins/group/auto/Localhost 0.3
323 TestNetworkPlugins/group/auto/HairPin 0.29
324 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 18.04
325 TestNetworkPlugins/group/kindnet/Start 63.3
326 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.17
327 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.48
328 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.84
329 TestNetworkPlugins/group/calico/Start 84.13
330 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
331 TestNetworkPlugins/group/kindnet/KubeletFlags 0.38
332 TestNetworkPlugins/group/kindnet/NetCatPod 9.51
333 TestNetworkPlugins/group/kindnet/DNS 0.28
334 TestNetworkPlugins/group/kindnet/Localhost 0.23
335 TestNetworkPlugins/group/kindnet/HairPin 0.24
336 TestNetworkPlugins/group/custom-flannel/Start 77.04
337 TestNetworkPlugins/group/calico/ControllerPod 5.03
338 TestNetworkPlugins/group/calico/KubeletFlags 0.35
339 TestNetworkPlugins/group/calico/NetCatPod 10.44
340 TestNetworkPlugins/group/calico/DNS 0.29
341 TestNetworkPlugins/group/calico/Localhost 0.28
342 TestNetworkPlugins/group/calico/HairPin 0.24
343 TestNetworkPlugins/group/enable-default-cni/Start 57.59
344 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.41
345 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.5
346 TestNetworkPlugins/group/custom-flannel/DNS 0.34
347 TestNetworkPlugins/group/custom-flannel/Localhost 0.25
348 TestNetworkPlugins/group/custom-flannel/HairPin 0.28
349 TestNetworkPlugins/group/flannel/Start 59.47
350 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.49
351 TestNetworkPlugins/group/enable-default-cni/NetCatPod 14.61
352 TestNetworkPlugins/group/enable-default-cni/DNS 0.27
353 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
354 TestNetworkPlugins/group/enable-default-cni/HairPin 0.28
355 TestNetworkPlugins/group/bridge/Start 87.63
356 TestNetworkPlugins/group/flannel/ControllerPod 5.04
357 TestNetworkPlugins/group/flannel/KubeletFlags 0.5
358 TestNetworkPlugins/group/flannel/NetCatPod 11.59
359 TestNetworkPlugins/group/flannel/DNS 0.21
360 TestNetworkPlugins/group/flannel/Localhost 0.23
361 TestNetworkPlugins/group/flannel/HairPin 0.2
362 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
363 TestNetworkPlugins/group/bridge/NetCatPod 8.37
364 TestNetworkPlugins/group/bridge/DNS 0.21
365 TestNetworkPlugins/group/bridge/Localhost 0.18
366 TestNetworkPlugins/group/bridge/HairPin 0.18
TestDownloadOnly/v1.16.0/json-events (11.45s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-313106 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-313106 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (11.446954686s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (11.45s)
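A --download-only start only populates the local image and preload caches and never creates a node. A minimal sketch of the same flow, reusing the profile name and flags from this run:

# Sketch: cache-only start, then confirm the preload tarball landed in the cache.
out/minikube-linux-arm64 start -o=json --download-only -p download-only-313106 \
  --force --alsologtostderr --kubernetes-version=v1.16.0 \
  --container-runtime=containerd --driver=docker
ls "$MINIKUBE_HOME/cache/preloaded-tarball/"  # MINIKUBE_HOME as set for this job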

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.41s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-313106
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-313106: exit status 85 (405.876647ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-313106 | jenkins | v1.30.1 | 10 Jun 23 16:21 UTC |          |
	|         | -p download-only-313106        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/10 16:21:50
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.20.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 16:21:50.707652    7531 out.go:296] Setting OutFile to fd 1 ...
	I0610 16:21:50.707886    7531 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 16:21:50.707915    7531 out.go:309] Setting ErrFile to fd 2...
	I0610 16:21:50.707934    7531 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 16:21:50.708093    7531 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16578-2220/.minikube/bin
	W0610 16:21:50.708239    7531 root.go:312] Error reading config file at /home/jenkins/minikube-integration/16578-2220/.minikube/config/config.json: open /home/jenkins/minikube-integration/16578-2220/.minikube/config/config.json: no such file or directory
	I0610 16:21:50.708675    7531 out.go:303] Setting JSON to true
	I0610 16:21:50.709435    7531 start.go:127] hostinfo: {"hostname":"ip-172-31-31-251","uptime":255,"bootTime":1686413856,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0610 16:21:50.709526    7531 start.go:137] virtualization:  
	I0610 16:21:50.712354    7531 out.go:97] [download-only-313106] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	W0610 16:21:50.712567    7531 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/16578-2220/.minikube/cache/preloaded-tarball: no such file or directory
	I0610 16:21:50.712704    7531 notify.go:220] Checking for updates...
	I0610 16:21:50.719638    7531 out.go:169] MINIKUBE_LOCATION=16578
	I0610 16:21:50.721627    7531 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 16:21:50.723416    7531 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16578-2220/kubeconfig
	I0610 16:21:50.725246    7531 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16578-2220/.minikube
	I0610 16:21:50.726919    7531 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0610 16:21:50.730647    7531 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0610 16:21:50.730919    7531 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 16:21:50.759177    7531 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0610 16:21:50.759322    7531 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0610 16:21:51.092358    7531 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-06-10 16:21:51.082106696 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0610 16:21:51.092465    7531 docker.go:294] overlay module found
	I0610 16:21:51.094532    7531 out.go:97] Using the docker driver based on user configuration
	I0610 16:21:51.094557    7531 start.go:297] selected driver: docker
	I0610 16:21:51.094563    7531 start.go:875] validating driver "docker" against <nil>
	I0610 16:21:51.094684    7531 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0610 16:21:51.156370    7531 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-06-10 16:21:51.14712338 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Se
rverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0610 16:21:51.156534    7531 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 16:21:51.156825    7531 start_flags.go:382] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0610 16:21:51.156990    7531 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 16:21:51.158973    7531 out.go:169] Using Docker driver with root privileges
	I0610 16:21:51.160850    7531 cni.go:84] Creating CNI manager for ""
	I0610 16:21:51.160866    7531 cni.go:142] "docker" driver + "containerd" runtime found, recommending kindnet
	I0610 16:21:51.160881    7531 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0610 16:21:51.160896    7531 start_flags.go:319] config:
	{Name:download-only-313106 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-313106 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 16:21:51.162711    7531 out.go:97] Starting control plane node download-only-313106 in cluster download-only-313106
	I0610 16:21:51.162747    7531 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0610 16:21:51.164259    7531 out.go:97] Pulling base image ...
	I0610 16:21:51.164279    7531 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0610 16:21:51.164409    7531 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon
	I0610 16:21:51.181644    7531 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b to local cache
	I0610 16:21:51.181792    7531 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local cache directory
	I0610 16:21:51.181903    7531 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b to local cache
	I0610 16:21:51.237949    7531 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4
	I0610 16:21:51.237978    7531 cache.go:57] Caching tarball of preloaded images
	I0610 16:21:51.238139    7531 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0610 16:21:51.240020    7531 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0610 16:21:51.240041    7531 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	I0610 16:21:51.367005    7531 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:1f1e2324dbd6e4f3d8734226d9194e9f -> /home/jenkins/minikube-integration/16578-2220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4
	I0610 16:21:56.978362    7531 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b as a tarball
	I0610 16:21:59.728632    7531 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	I0610 16:21:59.728733    7531 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16578-2220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-313106"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.41s)
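The non-zero exit is expected here: after a --download-only start there is no control plane node, so the logs command prints the message shown above and exits 85, which aaa_download_only_test.go records without failing the test. The preload tarball referenced in the log can also be fetched and checked by hand; a sketch using the URL and md5 printed above:

# Sketch: manual fetch and checksum of the v1.16.0 arm64 preload from the log above.
curl -fLo preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 \
  "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4"
echo "1f1e2324dbd6e4f3d8734226d9194e9f  preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4" | md5sum -c -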

                                                
                                    
TestDownloadOnly/v1.27.2/json-events (10.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-313106 --force --alsologtostderr --kubernetes-version=v1.27.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-313106 --force --alsologtostderr --kubernetes-version=v1.27.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (10.209649303s)
--- PASS: TestDownloadOnly/v1.27.2/json-events (10.21s)

                                                
                                    
TestDownloadOnly/v1.27.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/preload-exists
--- PASS: TestDownloadOnly/v1.27.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-313106
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-313106: exit status 85 (71.592801ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-313106 | jenkins | v1.30.1 | 10 Jun 23 16:21 UTC |          |
	|         | -p download-only-313106        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-313106 | jenkins | v1.30.1 | 10 Jun 23 16:22 UTC |          |
	|         | -p download-only-313106        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.2   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/10 16:22:02
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.20.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 16:22:02.556542    7612 out.go:296] Setting OutFile to fd 1 ...
	I0610 16:22:02.556792    7612 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 16:22:02.556817    7612 out.go:309] Setting ErrFile to fd 2...
	I0610 16:22:02.556835    7612 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 16:22:02.557044    7612 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16578-2220/.minikube/bin
	W0610 16:22:02.557205    7612 root.go:312] Error reading config file at /home/jenkins/minikube-integration/16578-2220/.minikube/config/config.json: open /home/jenkins/minikube-integration/16578-2220/.minikube/config/config.json: no such file or directory
	I0610 16:22:02.557586    7612 out.go:303] Setting JSON to true
	I0610 16:22:02.558380    7612 start.go:127] hostinfo: {"hostname":"ip-172-31-31-251","uptime":267,"bootTime":1686413856,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0610 16:22:02.558499    7612 start.go:137] virtualization:  
	I0610 16:22:02.585126    7612 out.go:97] [download-only-313106] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0610 16:22:02.585606    7612 notify.go:220] Checking for updates...
	I0610 16:22:02.617515    7612 out.go:169] MINIKUBE_LOCATION=16578
	I0610 16:22:02.649097    7612 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 16:22:02.673788    7612 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16578-2220/kubeconfig
	I0610 16:22:02.694311    7612 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16578-2220/.minikube
	I0610 16:22:02.715310    7612 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0610 16:22:02.770527    7612 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0610 16:22:02.771177    7612 config.go:182] Loaded profile config "download-only-313106": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W0610 16:22:02.771223    7612 start.go:783] api.Load failed for download-only-313106: filestore "download-only-313106": Docker machine "download-only-313106" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0610 16:22:02.771379    7612 driver.go:375] Setting default libvirt URI to qemu:///system
	W0610 16:22:02.771401    7612 start.go:783] api.Load failed for download-only-313106: filestore "download-only-313106": Docker machine "download-only-313106" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0610 16:22:02.796378    7612 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0610 16:22:02.796458    7612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0610 16:22:02.910728    7612 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-06-10 16:22:02.901144924 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0610 16:22:02.910830    7612 docker.go:294] overlay module found
	I0610 16:22:02.922599    7612 out.go:97] Using the docker driver based on existing profile
	I0610 16:22:02.922641    7612 start.go:297] selected driver: docker
	I0610 16:22:02.922649    7612 start.go:875] validating driver "docker" against &{Name:download-only-313106 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-313106 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP:}
	I0610 16:22:02.922844    7612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0610 16:22:02.997241    7612 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-06-10 16:22:02.987443878 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0610 16:22:02.997633    7612 cni.go:84] Creating CNI manager for ""
	I0610 16:22:02.997651    7612 cni.go:142] "docker" driver + "containerd" runtime found, recommending kindnet
	I0610 16:22:02.997661    7612 start_flags.go:319] config:
	{Name:download-only-313106 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:download-only-313106 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 16:22:03.008702    7612 out.go:97] Starting control plane node download-only-313106 in cluster download-only-313106
	I0610 16:22:03.008779    7612 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0610 16:22:03.016436    7612 out.go:97] Pulling base image ...
	I0610 16:22:03.016478    7612 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime containerd
	I0610 16:22:03.016535    7612 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon
	I0610 16:22:03.035311    7612 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b to local cache
	I0610 16:22:03.035428    7612 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local cache directory
	I0610 16:22:03.035449    7612 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local cache directory, skipping pull
	I0610 16:22:03.035458    7612 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b exists in cache, skipping pull
	I0610 16:22:03.035465    7612 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b as a tarball
	I0610 16:22:03.088731    7612 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.2/preloaded-images-k8s-v18-v1.27.2-containerd-overlay2-arm64.tar.lz4
	I0610 16:22:03.088758    7612 cache.go:57] Caching tarball of preloaded images
	I0610 16:22:03.088927    7612 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime containerd
	I0610 16:22:03.103598    7612 out.go:97] Downloading Kubernetes v1.27.2 preload ...
	I0610 16:22:03.103628    7612 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.2-containerd-overlay2-arm64.tar.lz4 ...
	I0610 16:22:03.223261    7612 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.2/preloaded-images-k8s-v18-v1.27.2-containerd-overlay2-arm64.tar.lz4?checksum=md5:f7a0ab28c8afe2dae72c45c225aaac8f -> /home/jenkins/minikube-integration/16578-2220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-containerd-overlay2-arm64.tar.lz4
	I0610 16:22:10.640221    7612 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.27.2-containerd-overlay2-arm64.tar.lz4 ...
	I0610 16:22:10.640341    7612 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16578-2220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-containerd-overlay2-arm64.tar.lz4 ...
	I0610 16:22:11.439879    7612 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on containerd
	I0610 16:22:11.440023    7612 profile.go:148] Saving config to /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/download-only-313106/config.json ...
	I0610 16:22:11.440234    7612 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime containerd
	I0610 16:22:11.440430    7612 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/16578-2220/.minikube/cache/linux/arm64/v1.27.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-313106"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.2/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-313106
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestBinaryMirror (0.58s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-962832 --alsologtostderr --binary-mirror http://127.0.0.1:36323 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-962832" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-962832
--- PASS: TestBinaryMirror (0.58s)

                                                
                                    
TestAddons/Setup (143.14s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-arm64 start -p addons-048679 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Done: out/minikube-linux-arm64 start -p addons-048679 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (2m23.135795746s)
--- PASS: TestAddons/Setup (143.14s)

                                                
                                    
TestAddons/parallel/Registry (15.74s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 33.942749ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6j8b6" [38d3067f-658a-4d81-b7ec-82ba4bacfc27] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.014387601s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-ns9p4" [8fea9c13-b49b-4d01-b80f-f96d7210b726] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.010743769s
addons_test.go:316: (dbg) Run:  kubectl --context addons-048679 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-048679 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-048679 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.722117806s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-arm64 -p addons-048679 ip
2023/06/10 16:24:52 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p addons-048679 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.74s)
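
For reference, the registry check above can be replayed by hand with the same commands the test drives; the profile name, busybox image, and port come straight from the log, and the final curl mirrors the test's DEBUG GET from the host:

    # probe the in-cluster registry Service from a throwaway pod
    kubectl --context addons-048679 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # then hit port 5000 on the node IP reported by "minikube ip" (192.168.49.2 above)
    out/minikube-linux-arm64 -p addons-048679 ip
    curl -v http://192.168.49.2:5000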

                                                
                                    
TestAddons/parallel/InspektorGadget (10.68s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-t9kqk" [0f718058-3c7f-497a-bc21-54e89a025d6f] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.009042876s
addons_test.go:817: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-048679
addons_test.go:817: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-048679: (5.668783935s)
--- PASS: TestAddons/parallel/InspektorGadget (10.68s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.71s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 3.635356ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-844d8db974-rm647" [070b6f46-3316-4379-84dd-104bb4ee8773] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.011046539s
addons_test.go:391: (dbg) Run:  kubectl --context addons-048679 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p addons-048679 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.71s)

                                                
                                    
TestAddons/parallel/CSI (48.98s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 6.731854ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-048679 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048679 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048679 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-048679 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ead93cb0-bdbc-486a-a963-795c7fde89f9] Pending
helpers_test.go:344: "task-pv-pod" [ead93cb0-bdbc-486a-a963-795c7fde89f9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ead93cb0-bdbc-486a-a963-795c7fde89f9] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.012002882s
addons_test.go:560: (dbg) Run:  kubectl --context addons-048679 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-048679 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-048679 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-048679 delete pod task-pv-pod
addons_test.go:570: (dbg) Done: kubectl --context addons-048679 delete pod task-pv-pod: (1.125442335s)
addons_test.go:576: (dbg) Run:  kubectl --context addons-048679 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-048679 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-048679 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [27307292-fb88-41dd-986d-8d73d8e56178] Pending
helpers_test.go:344: "task-pv-pod-restore" [27307292-fb88-41dd-986d-8d73d8e56178] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.011736267s
addons_test.go:602: (dbg) Run:  kubectl --context addons-048679 delete pod task-pv-pod-restore
addons_test.go:602: (dbg) Done: kubectl --context addons-048679 delete pod task-pv-pod-restore: (1.022224227s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-048679 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-048679 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-arm64 -p addons-048679 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-arm64 -p addons-048679 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.568116391s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-arm64 -p addons-048679 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (48.98s)
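
The CSI sequence above (create a PVC, mount it in a pod, snapshot it, then restore the snapshot into a new PVC and pod) can be replayed with the same testdata manifests from the minikube repository; a condensed sketch that omits the readiness waits the test performs between steps:

    kubectl --context addons-048679 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-048679 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-048679 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-048679 delete pod task-pv-pod
    kubectl --context addons-048679 delete pvc hpvc
    kubectl --context addons-048679 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-048679 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml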

                                                
                                    
TestAddons/parallel/Headlamp (13.63s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-048679 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-048679 --alsologtostderr -v=1: (1.610943709s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-6b5756787-qxv8w" [6d5c2703-3551-412c-a702-9d7d5fcbdb3d] Pending
helpers_test.go:344: "headlamp-6b5756787-qxv8w" [6d5c2703-3551-412c-a702-9d7d5fcbdb3d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-6b5756787-qxv8w" [6d5c2703-3551-412c-a702-9d7d5fcbdb3d] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.014769412s
--- PASS: TestAddons/parallel/Headlamp (13.63s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.42s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-fb67554b8-c2ggz" [2b1e2c90-c6ab-45c6-a571-c88870f1fd81] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.009214837s
addons_test.go:836: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-048679
--- PASS: TestAddons/parallel/CloudSpanner (5.42s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-048679 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-048679 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.37s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-048679
addons_test.go:148: (dbg) Done: out/minikube-linux-arm64 stop -p addons-048679: (12.139166721s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-048679
addons_test.go:156: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-048679
addons_test.go:161: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-048679
--- PASS: TestAddons/StoppedEnableDisable (12.37s)

                                                
                                    
TestCertOptions (42.15s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-287149 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-287149 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (39.425184922s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-287149 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-287149 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-287149 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-287149" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-287149
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-287149: (2.017444026s)
--- PASS: TestCertOptions (42.15s)
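
To spot-check the same assertions by hand, the openssl and ssh calls above can be piped through a filter for the extra SANs and the non-default apiserver port passed on the start line (the grep patterns are illustrative, not part of the test):

    out/minikube-linux-arm64 -p cert-options-287149 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -E '192\.168\.15\.15|www\.google\.com'
    out/minikube-linux-arm64 ssh -p cert-options-287149 -- "sudo cat /etc/kubernetes/admin.conf" | grep 8555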

                                                
                                    
TestCertExpiration (249.75s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-123948 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
E0610 17:02:38.756304    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt: no such file or directory
E0610 17:02:40.651400    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-123948 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (50.109250543s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-123948 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-123948 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (17.316687829s)
helpers_test.go:175: Cleaning up "cert-expiration-123948" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-123948
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-123948: (2.316315474s)
--- PASS: TestCertExpiration (249.75s)
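
The flow is two starts of one profile: the first issues cluster certificates with a 3-minute TTL, and after they lapse the second start with --cert-expiration=8760h is expected to regenerate them and bring the cluster back cleanly. The wait for expiry is inferred from the test's total runtime rather than shown in the log:

    out/minikube-linux-arm64 start -p cert-expiration-123948 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=containerd
    # ... let the short-lived certificates expire ...
    out/minikube-linux-arm64 start -p cert-expiration-123948 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 delete -p cert-expiration-123948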

                                                
                                    
TestForceSystemdFlag (48.33s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-081066 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-081066 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (45.7716348s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-081066 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-081066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-081066
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-081066: (2.195920388s)
--- PASS: TestForceSystemdFlag (48.33s)
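
The assertion here reads back the containerd config generated with --force-systemd; presumably it checks that the systemd cgroup driver was switched on. The grep below is an assumption for manual verification, not something shown in the log:

    out/minikube-linux-arm64 -p force-systemd-flag-081066 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup
    # expected if the flag took effect: SystemdCgroup = true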

                                                
                                    
TestForceSystemdEnv (55.47s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-819856 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:149: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-819856 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (53.15302636s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-819856 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-819856" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-819856
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-819856: (2.00921655s)
--- PASS: TestForceSystemdEnv (55.47s)

                                                
                                    
TestErrorSpam/setup (33.18s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-584494 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-584494 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-584494 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-584494 --driver=docker  --container-runtime=containerd: (33.178825167s)
--- PASS: TestErrorSpam/setup (33.18s)

                                                
                                    
TestErrorSpam/start (0.84s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-584494 --log_dir /tmp/nospam-584494 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-584494 --log_dir /tmp/nospam-584494 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-584494 --log_dir /tmp/nospam-584494 start --dry-run
--- PASS: TestErrorSpam/start (0.84s)

                                                
                                    
TestErrorSpam/status (1.16s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-584494 --log_dir /tmp/nospam-584494 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-584494 --log_dir /tmp/nospam-584494 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-584494 --log_dir /tmp/nospam-584494 status
--- PASS: TestErrorSpam/status (1.16s)

                                                
                                    
TestErrorSpam/pause (1.87s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-584494 --log_dir /tmp/nospam-584494 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-584494 --log_dir /tmp/nospam-584494 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-584494 --log_dir /tmp/nospam-584494 pause
--- PASS: TestErrorSpam/pause (1.87s)

                                                
                                    
TestErrorSpam/unpause (1.93s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-584494 --log_dir /tmp/nospam-584494 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-584494 --log_dir /tmp/nospam-584494 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-584494 --log_dir /tmp/nospam-584494 unpause
--- PASS: TestErrorSpam/unpause (1.93s)

                                                
                                    
TestErrorSpam/stop (1.46s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-584494 --log_dir /tmp/nospam-584494 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-584494 --log_dir /tmp/nospam-584494 stop: (1.262869403s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-584494 --log_dir /tmp/nospam-584494 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-584494 --log_dir /tmp/nospam-584494 stop
--- PASS: TestErrorSpam/stop (1.46s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1850: local sync path: /home/jenkins/minikube-integration/16578-2220/.minikube/files/etc/test/nested/copy/7526/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (85.46s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2229: (dbg) Run:  out/minikube-linux-arm64 start -p functional-351441 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2229: (dbg) Done: out/minikube-linux-arm64 start -p functional-351441 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m25.462034046s)
--- PASS: TestFunctional/serial/StartWithProxy (85.46s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (18.44s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:654: (dbg) Run:  out/minikube-linux-arm64 start -p functional-351441 --alsologtostderr -v=8
functional_test.go:654: (dbg) Done: out/minikube-linux-arm64 start -p functional-351441 --alsologtostderr -v=8: (18.436679803s)
functional_test.go:658: soft start took 18.437240494s for "functional-351441" cluster.
--- PASS: TestFunctional/serial/SoftStart (18.44s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:676: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:691: (dbg) Run:  kubectl --context functional-351441 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.27s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1044: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 cache add registry.k8s.io/pause:3.1
functional_test.go:1044: (dbg) Done: out/minikube-linux-arm64 -p functional-351441 cache add registry.k8s.io/pause:3.1: (1.460398916s)
functional_test.go:1044: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 cache add registry.k8s.io/pause:3.3
functional_test.go:1044: (dbg) Done: out/minikube-linux-arm64 -p functional-351441 cache add registry.k8s.io/pause:3.3: (1.470347779s)
functional_test.go:1044: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 cache add registry.k8s.io/pause:latest
functional_test.go:1044: (dbg) Done: out/minikube-linux-arm64 -p functional-351441 cache add registry.k8s.io/pause:latest: (1.335173626s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.27s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.34s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1072: (dbg) Run:  docker build -t minikube-local-cache-test:functional-351441 /tmp/TestFunctionalserialCacheCmdcacheadd_local4080569362/001
functional_test.go:1084: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 cache add minikube-local-cache-test:functional-351441
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 cache delete minikube-local-cache-test:functional-351441
functional_test.go:1078: (dbg) Run:  docker rmi minikube-local-cache-test:functional-351441
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.34s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1097: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1105: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.42s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1119: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.42s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.47s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1142: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1148: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1148: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-351441 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (345.981002ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 cache reload
functional_test.go:1153: (dbg) Done: out/minikube-linux-arm64 -p functional-351441 cache reload: (1.458525452s)
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.47s)
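
In plain terms, the sequence above removes the pause image from the node, confirms crictl can no longer find it (hence the expected exit status 1), reloads it from minikube's local cache, and confirms it is back:

    out/minikube-linux-arm64 -p functional-351441 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-351441 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # fails: image no longer present
    out/minikube-linux-arm64 -p functional-351441 cache reload
    out/minikube-linux-arm64 -p functional-351441 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # succeeds after the reload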

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1167: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1167: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:711: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 kubectl -- --context functional-351441 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:736: (dbg) Run:  out/kubectl --context functional-351441 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

                                                
                                    
TestFunctional/serial/ExtraConfig (55.48s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:752: (dbg) Run:  out/minikube-linux-arm64 start -p functional-351441 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0610 16:29:37.605625    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.crt: no such file or directory
E0610 16:29:37.613094    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.crt: no such file or directory
E0610 16:29:37.623329    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.crt: no such file or directory
E0610 16:29:37.643599    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.crt: no such file or directory
E0610 16:29:37.683836    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.crt: no such file or directory
E0610 16:29:37.764145    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.crt: no such file or directory
E0610 16:29:37.924484    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.crt: no such file or directory
E0610 16:29:38.244986    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.crt: no such file or directory
E0610 16:29:38.885822    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.crt: no such file or directory
E0610 16:29:40.166026    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.crt: no such file or directory
E0610 16:29:42.726321    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.crt: no such file or directory
E0610 16:29:47.846604    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.crt: no such file or directory
E0610 16:29:58.087082    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.crt: no such file or directory
functional_test.go:752: (dbg) Done: out/minikube-linux-arm64 start -p functional-351441 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (55.477867871s)
functional_test.go:756: restart took 55.477966053s for "functional-351441" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (55.48s)
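
This restart passes --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision. One way to confirm the flag actually reached the running kube-apiserver, offered as a suggestion rather than as part of the test output, is to inspect the static pod spec:

    kubectl --context functional-351441 -n kube-system get pod -l component=kube-apiserver -o yaml | grep enable-admission-plugins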

                                                
                                    
TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:805: (dbg) Run:  kubectl --context functional-351441 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:820: etcd phase: Running
functional_test.go:830: etcd status: Ready
functional_test.go:820: kube-apiserver phase: Running
functional_test.go:830: kube-apiserver status: Ready
functional_test.go:820: kube-controller-manager phase: Running
functional_test.go:830: kube-controller-manager status: Ready
functional_test.go:820: kube-scheduler phase: Running
functional_test.go:830: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                    
TestFunctional/serial/LogsCmd (2.06s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1231: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 logs
functional_test.go:1231: (dbg) Done: out/minikube-linux-arm64 -p functional-351441 logs: (2.058444028s)
--- PASS: TestFunctional/serial/LogsCmd (2.06s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-351441 config get cpus: exit status 14 (82.131583ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1194: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 config set cpus 2
functional_test.go:1194: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 config get cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-351441 config get cpus: exit status 14 (69.151311ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
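
This exchange is the expected round trip through minikube's per-profile config store; exit status 14 with "specified key could not be found in config" simply means the key is unset:

    out/minikube-linux-arm64 -p functional-351441 config set cpus 2
    out/minikube-linux-arm64 -p functional-351441 config get cpus      # prints 2
    out/minikube-linux-arm64 -p functional-351441 config unset cpus
    out/minikube-linux-arm64 -p functional-351441 config get cpus      # exit status 14: key not found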

                                                
                                    
TestFunctional/parallel/DashboardCmd (8.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:900: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-351441 --alsologtostderr -v=1]
functional_test.go:905: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-351441 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 35455: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.78s)

                                                
                                    
TestFunctional/parallel/DryRun (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:969: (dbg) Run:  out/minikube-linux-arm64 start -p functional-351441 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:969: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-351441 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (270.912156ms)

                                                
                                                
-- stdout --
	* [functional-351441] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16578-2220/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16578-2220/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 16:30:48.906333   34675 out.go:296] Setting OutFile to fd 1 ...
	I0610 16:30:48.907952   34675 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 16:30:48.907968   34675 out.go:309] Setting ErrFile to fd 2...
	I0610 16:30:48.907975   34675 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 16:30:48.908234   34675 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16578-2220/.minikube/bin
	I0610 16:30:48.908923   34675 out.go:303] Setting JSON to false
	I0610 16:30:48.910137   34675 start.go:127] hostinfo: {"hostname":"ip-172-31-31-251","uptime":793,"bootTime":1686413856,"procs":338,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0610 16:30:48.910209   34675 start.go:137] virtualization:  
	I0610 16:30:48.912701   34675 out.go:177] * [functional-351441] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0610 16:30:48.914788   34675 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 16:30:48.916391   34675 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 16:30:48.914899   34675 notify.go:220] Checking for updates...
	I0610 16:30:48.920485   34675 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16578-2220/kubeconfig
	I0610 16:30:48.922054   34675 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16578-2220/.minikube
	I0610 16:30:48.923986   34675 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0610 16:30:48.925406   34675 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 16:30:48.927245   34675 config.go:182] Loaded profile config "functional-351441": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
	I0610 16:30:48.927795   34675 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 16:30:48.986663   34675 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0610 16:30:48.986810   34675 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0610 16:30:49.099280   34675 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-06-10 16:30:49.088256061 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0610 16:30:49.099394   34675 docker.go:294] overlay module found
	I0610 16:30:49.101409   34675 out.go:177] * Using the docker driver based on existing profile
	I0610 16:30:49.103147   34675 start.go:297] selected driver: docker
	I0610 16:30:49.103164   34675 start.go:875] validating driver "docker" against &{Name:functional-351441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:functional-351441 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersi
on:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 16:30:49.103299   34675 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 16:30:49.105281   34675 out.go:177] 
	W0610 16:30:49.106810   34675 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0610 16:30:49.108913   34675 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:986: (dbg) Run:  out/minikube-linux-arm64 start -p functional-351441 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.76s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1015: (dbg) Run:  out/minikube-linux-arm64 start -p functional-351441 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1015: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-351441 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (303.115151ms)

                                                
                                                
-- stdout --
	* [functional-351441] minikube v1.30.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16578-2220/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16578-2220/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 16:30:49.703696   34912 out.go:296] Setting OutFile to fd 1 ...
	I0610 16:30:49.704026   34912 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 16:30:49.704038   34912 out.go:309] Setting ErrFile to fd 2...
	I0610 16:30:49.704045   34912 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 16:30:49.704661   34912 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16578-2220/.minikube/bin
	I0610 16:30:49.705640   34912 out.go:303] Setting JSON to false
	I0610 16:30:49.707035   34912 start.go:127] hostinfo: {"hostname":"ip-172-31-31-251","uptime":794,"bootTime":1686413856,"procs":338,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0610 16:30:49.707099   34912 start.go:137] virtualization:  
	I0610 16:30:49.709621   34912 out.go:177] * [functional-351441] minikube v1.30.1 sur Ubuntu 20.04 (arm64)
	I0610 16:30:49.711503   34912 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 16:30:49.713323   34912 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 16:30:49.714999   34912 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16578-2220/kubeconfig
	I0610 16:30:49.714554   34912 notify.go:220] Checking for updates...
	I0610 16:30:49.721052   34912 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16578-2220/.minikube
	I0610 16:30:49.722609   34912 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0610 16:30:49.724177   34912 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 16:30:49.726274   34912 config.go:182] Loaded profile config "functional-351441": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
	I0610 16:30:49.727340   34912 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 16:30:49.783298   34912 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0610 16:30:49.783438   34912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0610 16:30:49.889450   34912 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-06-10 16:30:49.878895672 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0610 16:30:49.889559   34912 docker.go:294] overlay module found
	I0610 16:30:49.892792   34912 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0610 16:30:49.894484   34912 start.go:297] selected driver: docker
	I0610 16:30:49.894500   34912 start.go:875] validating driver "docker" against &{Name:functional-351441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:functional-351441 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersi
on:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 16:30:49.894637   34912 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 16:30:49.897079   34912 out.go:177] 
	W0610 16:30:49.898847   34912 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0610 16:30:49.900490   34912 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.30s)
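The French stdout/stderr above is minikube's built-in translation being selected; the log does not show how the test picks the language, but presumably it is driven by the process locale, so a hand reproduction might look like this (the LC_ALL value is an assumption, not taken from the log):
	# Run the same dry-run under a French locale; if translation selection works,
	# the driver message and the memory error appear in French as in the log above.
	LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-351441 --dry-run \
	  --memory 250MB --alsologtostderr --driver=docker --container-runtime=containerd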

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:849: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 status
functional_test.go:855: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:867: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.21s)
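The -f flag used above takes a Go template over minikube's status struct, which makes single fields easy to pull out in scripts; a couple of variants built from the same invocation:
	# Print only the host state (the test's template also reads .Kubelet,
	# .APIServer and .Kubeconfig).
	out/minikube-linux-arm64 -p functional-351441 status -f '{{.Host}}'
	# Full machine-readable status for tooling.
	out/minikube-linux-arm64 -p functional-351441 status -o json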

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (8.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-351441 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1633: (dbg) Run:  kubectl --context functional-351441 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58d66798bb-hsr78" [545daec3-b3f2-4fd3-b654-022ace10f4e6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-58d66798bb-hsr78" [545daec3-b3f2-4fd3-b654-022ace10f4e6] Running
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.009874078s
functional_test.go:1647: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 service hello-node-connect --url
functional_test.go:1653: found endpoint for hello-node-connect: http://192.168.49.2:32718
functional_test.go:1673: http://192.168.49.2:32718: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58d66798bb-hsr78

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32718
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.81s)
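Reassembled from the log above, the end-to-end check is: deploy the arm echoserver, expose it on a NodePort, ask minikube for the reachable URL, and fetch it. The curl step is the by-hand equivalent of the test's HTTP check; names are the ones from this run:
	# Deploy and expose the echoserver.
	kubectl --context functional-351441 create deployment hello-node-connect \
	  --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-351441 expose deployment hello-node-connect \
	  --type=NodePort --port=8080
	# Resolve the NodePort URL (http://192.168.49.2:32718 in this run) and request it;
	# the echoserver reply carries the hostname and request headers shown above.
	URL=$(out/minikube-linux-arm64 -p functional-351441 service hello-node-connect --url)
	curl -s "$URL"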

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1688: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 addons list
functional_test.go:1700: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (26.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6551384b-70ea-4c2e-967c-42726008c70f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0087728s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-351441 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-351441 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-351441 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-351441 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [36ee8d52-1681-451d-9062-10fa3c188718] Pending
helpers_test.go:344: "sp-pod" [36ee8d52-1681-451d-9062-10fa3c188718] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [36ee8d52-1681-451d-9062-10fa3c188718] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.014821584s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-351441 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-351441 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-351441 delete -f testdata/storage-provisioner/pod.yaml: (1.564779802s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-351441 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2fc8ddef-7e2e-4fc6-803c-30850153f305] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2fc8ddef-7e2e-4fc6-803c-30850153f305] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.016202416s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-351441 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.09s)
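The persistence check above reduces to: write through the claim, delete the pod, recreate it, and confirm the file survived the restart. Put together from the commands in the log (the testdata manifests themselves are not reproduced here):
	# Create the claim and a pod (sp-pod) that mounts it at /tmp/mount.
	kubectl --context functional-351441 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-351441 apply -f testdata/storage-provisioner/pod.yaml
	# Write a marker file through the mount, then delete and recreate the pod.
	kubectl --context functional-351441 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-351441 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-351441 apply -f testdata/storage-provisioner/pod.yaml
	# The file written before the delete should still be listed.
	kubectl --context functional-351441 exec sp-pod -- ls /tmp/mount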

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1723: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 ssh "echo hello"
functional_test.go:1740: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.74s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 ssh -n functional-351441 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 cp functional-351441:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd403488051/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 ssh -n functional-351441 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.48s)
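minikube cp copies in both directions, host path to node path and <node>:<path> back to the host; the two transfers above plus the in-node verification look like this (the local destination path here is a placeholder, the test used a per-run temp directory):
	# Host -> node, then node -> host.
	out/minikube-linux-arm64 -p functional-351441 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-arm64 -p functional-351441 cp functional-351441:/home/docker/cp-test.txt /tmp/cp-test.txt
	# Read the in-node copy over ssh to confirm the transfer.
	out/minikube-linux-arm64 -p functional-351441 ssh -n functional-351441 "sudo cat /home/docker/cp-test.txt"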

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1924: Checking for existence of /etc/test/nested/copy/7526/hosts within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 ssh "sudo cat /etc/test/nested/copy/7526/hosts"
functional_test.go:1931: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1967: Checking for existence of /etc/ssl/certs/7526.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 ssh "sudo cat /etc/ssl/certs/7526.pem"
functional_test.go:1967: Checking for existence of /usr/share/ca-certificates/7526.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 ssh "sudo cat /usr/share/ca-certificates/7526.pem"
functional_test.go:1967: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/75262.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 ssh "sudo cat /etc/ssl/certs/75262.pem"
functional_test.go:1994: Checking for existence of /usr/share/ca-certificates/75262.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 ssh "sudo cat /usr/share/ca-certificates/75262.pem"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.24s)
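The .0 files checked above look like OpenSSL subject-hash names for the synced certificates (51391683.0 and 3ec20f2e.0 alongside 7526.pem and 75262.pem). Assuming that is how minikube generates them, and assuming openssl is available in the node image, the mapping can be spot-checked from inside the node:
	# Print the subject hash of the synced cert; for this run it should match the
	# 51391683 prefix of the /etc/ssl/certs/51391683.0 file the test reads.
	out/minikube-linux-arm64 -p functional-351441 ssh \
	  "openssl x509 -noout -subject_hash -in /etc/ssl/certs/7526.pem"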

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:217: (dbg) Run:  kubectl --context functional-351441 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)
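The go-template above prints every label key on the first node; for a quick interactive look the same information is available through kubectl's built-in flag:
	# List each node with all of its labels (less scriptable than the template form
	# used by the test, but easier to read).
	kubectl --context functional-351441 get nodes --show-labels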

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2022: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 ssh "sudo systemctl is-active docker"
functional_test.go:2022: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-351441 ssh "sudo systemctl is-active docker": exit status 1 (438.222351ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2022: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 ssh "sudo systemctl is-active crio"
functional_test.go:2022: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-351441 ssh "sudo systemctl is-active crio": exit status 1 (369.649874ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.81s)
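The two exit-status-1 results above are the passing condition: this cluster runs containerd, so docker and crio must both report inactive, and systemctl is-active exits non-zero for an inactive unit (status 3 here), which minikube ssh passes through. A hand check:
	# "inactive" on stdout plus a non-zero exit status is what the test expects.
	out/minikube-linux-arm64 -p functional-351441 ssh "sudo systemctl is-active docker"
	echo "exit status: $?"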

                                                
                                    
x
+
TestFunctional/parallel/License (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2283: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2251: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2265: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.83s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:259: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 image ls --format short --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-linux-arm64 -p functional-351441 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.2
registry.k8s.io/kube-proxy:v1.27.2
registry.k8s.io/kube-controller-manager:v1.27.2
registry.k8s.io/kube-apiserver:v1.27.2
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-351441
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:267: (dbg) Stderr: out/minikube-linux-arm64 -p functional-351441 image ls --format short --alsologtostderr:
I0610 16:30:55.121302   35967 out.go:296] Setting OutFile to fd 1 ...
I0610 16:30:55.121551   35967 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 16:30:55.121579   35967 out.go:309] Setting ErrFile to fd 2...
I0610 16:30:55.121597   35967 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 16:30:55.121768   35967 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16578-2220/.minikube/bin
I0610 16:30:55.122406   35967 config.go:182] Loaded profile config "functional-351441": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
I0610 16:30:55.122622   35967 config.go:182] Loaded profile config "functional-351441": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
I0610 16:30:55.123169   35967 cli_runner.go:164] Run: docker container inspect functional-351441 --format={{.State.Status}}
I0610 16:30:55.144122   35967 ssh_runner.go:195] Run: systemctl --version
I0610 16:30:55.144179   35967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-351441
I0610 16:30:55.174998   35967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/functional-351441/id_rsa Username:docker}
I0610 16:30:55.284741   35967 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)
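As the stderr above shows, image ls is backed by running crictl inside the node over ssh; the raw JSON that gets re-formatted into the short/table/json/yaml listings can be pulled directly:
	# The data source behind "minikube image ls" on a containerd cluster.
	out/minikube-linux-arm64 -p functional-351441 ssh "sudo crictl images --output json"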

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:259: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 image ls --format table --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-linux-arm64 -p functional-351441 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:97e046 | 14.6MB |
| registry.k8s.io/kube-proxy                  | v1.27.2            | sha256:29921a | 21.4MB |
| docker.io/library/nginx                     | latest             | sha256:c42efe | 55.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| docker.io/library/minikube-local-cache-test | functional-351441  | sha256:b4816b | 1.01kB |
| docker.io/library/nginx                     | alpine             | sha256:5ee47d | 16.4MB |
| localhost/my-image                          | functional-351441  | sha256:307ecf | 831kB  |
| registry.k8s.io/kube-scheduler              | v1.27.2            | sha256:305d7e | 16.5MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| registry.k8s.io/kube-apiserver              | v1.27.2            | sha256:72c9df | 30.4MB |
| registry.k8s.io/kube-controller-manager     | v1.27.2            | sha256:2ee705 | 28.2MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
| docker.io/kindest/kindnetd                  | v20230511-dc714da8 | sha256:b18bf7 | 25.3MB |
| registry.k8s.io/etcd                        | 3.5.7-0            | sha256:24bc64 | 80.7MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:267: (dbg) Stderr: out/minikube-linux-arm64 -p functional-351441 image ls --format table --alsologtostderr:
I0610 16:30:59.079781   36346 out.go:296] Setting OutFile to fd 1 ...
I0610 16:30:59.080014   36346 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 16:30:59.080043   36346 out.go:309] Setting ErrFile to fd 2...
I0610 16:30:59.080063   36346 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 16:30:59.080274   36346 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16578-2220/.minikube/bin
I0610 16:30:59.080940   36346 config.go:182] Loaded profile config "functional-351441": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
I0610 16:30:59.081107   36346 config.go:182] Loaded profile config "functional-351441": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
I0610 16:30:59.081600   36346 cli_runner.go:164] Run: docker container inspect functional-351441 --format={{.State.Status}}
I0610 16:30:59.103187   36346 ssh_runner.go:195] Run: systemctl --version
I0610 16:30:59.103236   36346 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-351441
I0610 16:30:59.121510   36346 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/functional-351441/id_rsa Username:docker}
I0610 16:30:59.224639   36346 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:259: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 image ls --format json --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-linux-arm64 -p functional-351441 image ls --format json --alsologtostderr:
[{"id":"sha256:c42efe0b54387756e68d167a437aef21451f63eebd9330bb555367d67128386c","repoDigests":["docker.io/library/nginx@sha256:af296b188c7b7df99ba960ca614439c99cb7cf252ed7bbc23e90cfda59092305"],"repoTags":["docker.io/library/nginx:latest"],"size":"55764037"},{"id":"sha256:72c9df6be7f1b997e4a31b5cb9aa7262e5278905af97e6a69e341e3f0f9bbaae","repoDigests":["registry.k8s.io/kube-apiserver@sha256:94e48585629fde3c1d06c6ae8f62885d3052f12a1072ffd97611296525eff5b9"],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.2"],"size":"30386736"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79","repoDigests":["docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"25334607"},{"id":"sha256:5ee47dcca7543750b3941b52e98f103bbbae
9aaf574ab4dc018e1e7d34e505ad","repoDigests":["docker.io/library/nginx@sha256:2e776a66a3556f001aba13431b26e448fe8acba277bf93d2ab1a785571a46d90"],"repoTags":["docker.io/library/nginx:alpine"],"size":"16367707"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:307ecf49eab650fef134a1065020e164f289879917c68a4b39bad68a95168d1b","repoDigests":[],"repoTags":["localhost/my-image:functional-351441"],"size":"830915"},{"id":"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"14557471"},{"id":"sha256:2ee705380c3c59a538b853cbe9ae9886ebbd0001a4cea4add5adeea48e5f48d4","repoDi
gests":["registry.k8s.io/kube-controller-manager@sha256:b0990ef7c9ce9edd0f57355a7e4cb43a71e864bfd2cd55bc68e4998e00213b56"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.2"],"size":"28213131"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:b4816b64589d81cee386cb05c9c29a235c8dd975223723d0b84b25e16fcdc3b8","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-351441"],"size":"1006"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:
1.28.4-glibc"],"size":"1935750"},{"id":"sha256:24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737","repoDigests":["registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83"],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"80665728"},{"id":"sha256:29921a084542255eb81a1a660a603b1a24636d88b202f9010daae75fa32754c0","repoDigests":["registry.k8s.io/kube-proxy@sha256:1e4f13f5f5c215813fb9c9c6f56da1c0354363f2a69bd12732658f79d585864f"],"repoTags":["registry.k8s.io/kube-proxy:v1.27.2"],"size":"21369669"},{"id":"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951f
bcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:305d7ed1dae2877c3a80d434c5fb9f1aac1aa3d2431c36130a3fcd1970e93840","repoDigests":["registry.k8s.io/kube-scheduler@sha256:89e8c591cc58bd0e8651dddee3de290399b1ae5ad14779afe84779083fe05177"],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.2"],"size":"16545689"}]
functional_test.go:267: (dbg) Stderr: out/minikube-linux-arm64 -p functional-351441 image ls --format json --alsologtostderr:
I0610 16:30:58.760622   36303 out.go:296] Setting OutFile to fd 1 ...
I0610 16:30:58.760824   36303 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 16:30:58.760847   36303 out.go:309] Setting ErrFile to fd 2...
I0610 16:30:58.760865   36303 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 16:30:58.761056   36303 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16578-2220/.minikube/bin
I0610 16:30:58.761710   36303 config.go:182] Loaded profile config "functional-351441": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
I0610 16:30:58.761878   36303 config.go:182] Loaded profile config "functional-351441": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
I0610 16:30:58.762609   36303 cli_runner.go:164] Run: docker container inspect functional-351441 --format={{.State.Status}}
I0610 16:30:58.786424   36303 ssh_runner.go:195] Run: systemctl --version
I0610 16:30:58.786524   36303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-351441
I0610 16:30:58.817063   36303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/functional-351441/id_rsa Username:docker}
I0610 16:30:58.916706   36303 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:259: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 image ls --format yaml --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-linux-arm64 -p functional-351441 image ls --format yaml --alsologtostderr:
- id: sha256:305d7ed1dae2877c3a80d434c5fb9f1aac1aa3d2431c36130a3fcd1970e93840
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:89e8c591cc58bd0e8651dddee3de290399b1ae5ad14779afe84779083fe05177
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.2
size: "16545689"
- id: sha256:b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79
repoDigests:
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "25334607"
- id: sha256:5ee47dcca7543750b3941b52e98f103bbbae9aaf574ab4dc018e1e7d34e505ad
repoDigests:
- docker.io/library/nginx@sha256:2e776a66a3556f001aba13431b26e448fe8acba277bf93d2ab1a785571a46d90
repoTags:
- docker.io/library/nginx:alpine
size: "16367707"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:29921a084542255eb81a1a660a603b1a24636d88b202f9010daae75fa32754c0
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1e4f13f5f5c215813fb9c9c6f56da1c0354363f2a69bd12732658f79d585864f
repoTags:
- registry.k8s.io/kube-proxy:v1.27.2
size: "21369669"
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:b4816b64589d81cee386cb05c9c29a235c8dd975223723d0b84b25e16fcdc3b8
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-351441
size: "1006"
- id: sha256:c42efe0b54387756e68d167a437aef21451f63eebd9330bb555367d67128386c
repoDigests:
- docker.io/library/nginx@sha256:af296b188c7b7df99ba960ca614439c99cb7cf252ed7bbc23e90cfda59092305
repoTags:
- docker.io/library/nginx:latest
size: "55764037"
- id: sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "14557471"
- id: sha256:2ee705380c3c59a538b853cbe9ae9886ebbd0001a4cea4add5adeea48e5f48d4
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:b0990ef7c9ce9edd0f57355a7e4cb43a71e864bfd2cd55bc68e4998e00213b56
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.2
size: "28213131"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:72c9df6be7f1b997e4a31b5cb9aa7262e5278905af97e6a69e341e3f0f9bbaae
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:94e48585629fde3c1d06c6ae8f62885d3052f12a1072ffd97611296525eff5b9
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.2
size: "30386736"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737
repoDigests:
- registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "80665728"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"

                                                
                                                
functional_test.go:267: (dbg) Stderr: out/minikube-linux-arm64 -p functional-351441 image ls --format yaml --alsologtostderr:
I0610 16:30:55.441503   35995 out.go:296] Setting OutFile to fd 1 ...
I0610 16:30:55.441753   35995 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 16:30:55.441779   35995 out.go:309] Setting ErrFile to fd 2...
I0610 16:30:55.441798   35995 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 16:30:55.441993   35995 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16578-2220/.minikube/bin
I0610 16:30:55.442717   35995 config.go:182] Loaded profile config "functional-351441": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
I0610 16:30:55.442887   35995 config.go:182] Loaded profile config "functional-351441": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
I0610 16:30:55.443404   35995 cli_runner.go:164] Run: docker container inspect functional-351441 --format={{.State.Status}}
I0610 16:30:55.471002   35995 ssh_runner.go:195] Run: systemctl --version
I0610 16:30:55.471063   35995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-351441
I0610 16:30:55.495879   35995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/functional-351441/id_rsa Username:docker}
I0610 16:30:55.610087   35995 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 ssh pgrep buildkitd
functional_test.go:306: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-351441 ssh pgrep buildkitd: exit status 1 (376.610298ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 image build -t localhost/my-image:functional-351441 testdata/build --alsologtostderr
2023/06/10 16:30:58 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p functional-351441 image build -t localhost/my-image:functional-351441 testdata/build --alsologtostderr: (2.832134937s)
functional_test.go:321: (dbg) Stderr: out/minikube-linux-arm64 -p functional-351441 image build -t localhost/my-image:functional-351441 testdata/build --alsologtostderr:
I0610 16:30:56.131721   36073 out.go:296] Setting OutFile to fd 1 ...
I0610 16:30:56.132316   36073 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 16:30:56.132322   36073 out.go:309] Setting ErrFile to fd 2...
I0610 16:30:56.132328   36073 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 16:30:56.132475   36073 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16578-2220/.minikube/bin
I0610 16:30:56.133069   36073 config.go:182] Loaded profile config "functional-351441": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
I0610 16:30:56.133765   36073 config.go:182] Loaded profile config "functional-351441": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
I0610 16:30:56.134228   36073 cli_runner.go:164] Run: docker container inspect functional-351441 --format={{.State.Status}}
I0610 16:30:56.164338   36073 ssh_runner.go:195] Run: systemctl --version
I0610 16:30:56.164386   36073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-351441
I0610 16:30:56.187952   36073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/functional-351441/id_rsa Username:docker}
I0610 16:30:56.292384   36073 build_images.go:151] Building image from path: /tmp/build.3129986366.tar
I0610 16:30:56.292449   36073 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0610 16:30:56.311337   36073 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3129986366.tar
I0610 16:30:56.317658   36073 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3129986366.tar: stat -c "%s %y" /var/lib/minikube/build/build.3129986366.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3129986366.tar': No such file or directory
I0610 16:30:56.317693   36073 ssh_runner.go:362] scp /tmp/build.3129986366.tar --> /var/lib/minikube/build/build.3129986366.tar (3072 bytes)
I0610 16:30:56.362043   36073 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3129986366
I0610 16:30:56.378150   36073 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3129986366 -xf /var/lib/minikube/build/build.3129986366.tar
I0610 16:30:56.390939   36073 containerd.go:378] Building image: /var/lib/minikube/build/build.3129986366
I0610 16:30:56.391024   36073 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3129986366 --local dockerfile=/var/lib/minikube/build/build.3129986366 --output type=image,name=localhost/my-image:functional-351441
#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 97B done
#2 DONE 0.0s

                                                
                                                
#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 0.8s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.8s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:8e19390cda912d6253775c41e3ccf746eeba49be5b3199d980b046159d1968c3
#8 exporting manifest sha256:8e19390cda912d6253775c41e3ccf746eeba49be5b3199d980b046159d1968c3 0.0s done
#8 exporting config sha256:307ecf49eab650fef134a1065020e164f289879917c68a4b39bad68a95168d1b 0.0s done
#8 naming to localhost/my-image:functional-351441 done
#8 DONE 0.1s
I0610 16:30:58.857029   36073 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3129986366 --local dockerfile=/var/lib/minikube/build/build.3129986366 --output type=image,name=localhost/my-image:functional-351441: (2.465975557s)
I0610 16:30:58.857102   36073 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3129986366
I0610 16:30:58.874155   36073 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3129986366.tar
I0610 16:30:58.885001   36073 build_images.go:207] Built localhost/my-image:functional-351441 from /tmp/build.3129986366.tar
I0610 16:30:58.885041   36073 build_images.go:123] succeeded building to: functional-351441
I0610 16:30:58.885045   36073 build_images.go:124] failed building to: 
functional_test.go:446: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.51s)
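Per the stderr above, minikube image build tars the context, copies it into the node under /var/lib/minikube/build, and drives BuildKit's buildctl with the dockerfile.v0 frontend. The test's invocation and the buildctl call it produced (the build.<n> staging name is generated per run) were:
	# What the test ran on the host.
	out/minikube-linux-arm64 -p functional-351441 image build \
	  -t localhost/my-image:functional-351441 testdata/build --alsologtostderr
	# What minikube then executed inside the node, as logged above.
	sudo buildctl build --frontend dockerfile.v0 \
	  --local context=/var/lib/minikube/build/build.3129986366 \
	  --local dockerfile=/var/lib/minikube/build/build.3129986366 \
	  --output type=image,name=localhost/my-image:functional-351441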

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:340: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:340: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.898192919s)
functional_test.go:345: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-351441
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.93s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2114: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2114: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2114: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (9.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-351441 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1443: (dbg) Run:  kubectl --context functional-351441 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-7b684b55f9-ww785" [906822cd-953e-41fe-a9e7-9146f2eed996] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-7b684b55f9-ww785" [906822cd-953e-41fe-a9e7-9146f2eed996] Running
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.038810489s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1457: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1487: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 service list -o json
functional_test.go:1492: Took "428.377716ms" to run "out/minikube-linux-arm64 -p functional-351441 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1507: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 service --namespace=default --https --url hello-node
E0610 16:30:18.568196    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.crt: no such file or directory
functional_test.go:1520: found endpoint: https://192.168.49.2:31046
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1538: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1557: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 service hello-node --url
functional_test.go:1563: found endpoint for hello-node: http://192.168.49.2:31046
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.52s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-351441 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-351441 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-351441 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-351441 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 32731: os: process already finished
helpers_test.go:508: unable to kill pid 32596: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.74s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 image rm gcr.io/google-containers/addon-resizer:functional-351441 --alsologtostderr
functional_test.go:446: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.70s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-351441 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-351441 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [2f4f87d5-e3fb-4565-9ae1-a00468bc781a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [2f4f87d5-e3fb-4565-9ae1-a00468bc781a] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.012406545s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.65s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:417: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-351441
functional_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 image save --daemon gcr.io/google-containers/addon-resizer:functional-351441 --alsologtostderr
functional_test.go:427: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-351441
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.62s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-351441 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.56.103 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-351441 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1268: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1273: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1308: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1313: Took "477.369431ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1322: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1327: Took "77.258706ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.55s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1359: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1364: Took "358.633887ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1372: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1377: Took "54.779453ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)
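The JSON form of the profile listing lends itself to scripting. A small sketch, assuming the usual valid/invalid layout of minikube's profile JSON (the payload itself is not captured in this log, so the field names are assumptions):

	# prints one profile name per line from the assumed 'valid' array
	out/minikube-linux-arm64 profile list -o json | jq -r '.valid[].Name'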

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-351441 /tmp/TestFunctionalparallelMountCmdany-port3665813101/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1686414641929624522" to /tmp/TestFunctionalparallelMountCmdany-port3665813101/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1686414641929624522" to /tmp/TestFunctionalparallelMountCmdany-port3665813101/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1686414641929624522" to /tmp/TestFunctionalparallelMountCmdany-port3665813101/001/test-1686414641929624522
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-351441 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (419.711185ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jun 10 16:30 created-by-test
-rw-r--r-- 1 docker docker 24 Jun 10 16:30 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jun 10 16:30 test-1686414641929624522
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 ssh cat /mount-9p/test-1686414641929624522
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-351441 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [0f263182-7c47-48f1-a9f9-7d9a2322d388] Pending
helpers_test.go:344: "busybox-mount" [0f263182-7c47-48f1-a9f9-7d9a2322d388] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [0f263182-7c47-48f1-a9f9-7d9a2322d388] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [0f263182-7c47-48f1-a9f9-7d9a2322d388] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.012089143s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-351441 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-351441 /tmp/TestFunctionalparallelMountCmdany-port3665813101/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.33s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-351441 /tmp/TestFunctionalparallelMountCmdspecific-port2693177911/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-351441 /tmp/TestFunctionalparallelMountCmdspecific-port2693177911/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-351441 ssh "sudo umount -f /mount-9p": exit status 1 (363.882633ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-351441 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-351441 /tmp/TestFunctionalparallelMountCmdspecific-port2693177911/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.59s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-351441 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2280994841/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-351441 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2280994841/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-351441 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2280994841/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-351441 ssh "findmnt -T" /mount1: exit status 1 (1.128625927s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-351441 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-351441 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-351441 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2280994841/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-351441 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2280994841/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-351441 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2280994841/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.66s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.1s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-351441
--- PASS: TestFunctional/delete_addon-resizer_images (0.10s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:196: (dbg) Run:  docker rmi -f localhost/my-image:functional-351441
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:204: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-351441
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (87.29s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-879929 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0610 16:32:21.449086    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-879929 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m27.292357884s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (87.29s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (8.92s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-879929 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-879929 addons enable ingress --alsologtostderr -v=5: (8.916264141s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (8.92s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.43s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-879929 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.43s)

                                                
                                    
TestJSONOutput/start/Command (84.66s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-059178 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0610 16:34:37.605905    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-059178 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m24.654333529s)
--- PASS: TestJSONOutput/start/Command (84.66s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.81s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-059178 --output=json --user=testUser
E0610 16:35:05.289385    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.crt: no such file or directory
--- PASS: TestJSONOutput/pause/Command (0.81s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.75s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-059178 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.75s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.81s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-059178 --output=json --user=testUser
E0610 16:35:08.386668    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/functional-351441/client.crt: no such file or directory
E0610 16:35:08.391867    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/functional-351441/client.crt: no such file or directory
E0610 16:35:08.402613    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/functional-351441/client.crt: no such file or directory
E0610 16:35:08.422888    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/functional-351441/client.crt: no such file or directory
E0610 16:35:08.463091    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/functional-351441/client.crt: no such file or directory
E0610 16:35:08.543423    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/functional-351441/client.crt: no such file or directory
E0610 16:35:08.703890    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/functional-351441/client.crt: no such file or directory
E0610 16:35:09.024691    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/functional-351441/client.crt: no such file or directory
E0610 16:35:09.665610    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/functional-351441/client.crt: no such file or directory
E0610 16:35:10.946091    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/functional-351441/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-059178 --output=json --user=testUser: (5.809798988s)
--- PASS: TestJSONOutput/stop/Command (5.81s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-556264 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-556264 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (78.357435ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"919c8a6d-3d58-4a44-8550-a341c81d82d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-556264] minikube v1.30.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f98fd8c1-07df-42ca-b52e-dc6f78de1f1b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16578"}}
	{"specversion":"1.0","id":"ba9d4eef-4d52-493f-9b49-9b859f440fc7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"705928d7-c306-4607-8772-362a5b2239f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16578-2220/kubeconfig"}}
	{"specversion":"1.0","id":"972bd033-01f7-46a5-af47-eb5eeeec2f29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16578-2220/.minikube"}}
	{"specversion":"1.0","id":"09421fb0-80e4-4b77-9cf0-51bf85aa0b0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"307871bf-8865-483a-81d5-ffa0d2919cb3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4b9a9899-df1b-45b9-8213-49a699ca5167","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-556264" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-556264
--- PASS: TestErrorJSONOutput (0.22s)
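Each line of the --output=json stream above is a CloudEvents envelope, so the error event can be picked out mechanically. A sketch using jq (the filter is illustrative; the event fields match the stdout captured above):

	out/minikube-linux-arm64 start -p json-output-error-556264 --memory=2200 \
	  --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
	# expected, per the log above: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/arm64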

                                                
                                    
TestKicCustomNetwork/create_custom_network (44.92s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-457753 --network=
E0610 16:35:18.627421    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/functional-351441/client.crt: no such file or directory
E0610 16:35:28.868266    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/functional-351441/client.crt: no such file or directory
E0610 16:35:49.348460    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/functional-351441/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-457753 --network=: (42.716173284s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-457753" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-457753
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-457753: (2.17903557s)
--- PASS: TestKicCustomNetwork/create_custom_network (44.92s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (36.34s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-119095 --network=bridge
E0610 16:36:30.308657    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/functional-351441/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-119095 --network=bridge: (34.375436419s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-119095" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-119095
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-119095: (1.941895061s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.34s)

                                                
                                    
TestKicExistingNetwork (35.42s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-607668 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-607668 --network=existing-network: (33.319074379s)
helpers_test.go:175: Cleaning up "existing-network-607668" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-607668
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-607668: (1.937564072s)
--- PASS: TestKicExistingNetwork (35.42s)

                                                
                                    
TestKicCustomSubnet (33.29s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-684994 --subnet=192.168.60.0/24
E0610 16:37:38.756680    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt: no such file or directory
E0610 16:37:38.761920    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt: no such file or directory
E0610 16:37:38.772134    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt: no such file or directory
E0610 16:37:38.792347    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt: no such file or directory
E0610 16:37:38.832567    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt: no such file or directory
E0610 16:37:38.912811    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt: no such file or directory
E0610 16:37:39.073140    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt: no such file or directory
E0610 16:37:39.393630    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt: no such file or directory
E0610 16:37:40.034488    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt: no such file or directory
E0610 16:37:41.314690    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt: no such file or directory
E0610 16:37:43.875608    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-684994 --subnet=192.168.60.0/24: (31.231875117s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-684994 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-684994" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-684994
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-684994: (2.037434299s)
--- PASS: TestKicCustomSubnet (33.29s)

                                                
                                    
TestKicStaticIP (39.66s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-604608 --static-ip=192.168.200.200
E0610 16:37:48.996439    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt: no such file or directory
E0610 16:37:52.228872    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/functional-351441/client.crt: no such file or directory
E0610 16:37:59.236634    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt: no such file or directory
E0610 16:38:19.716831    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-604608 --static-ip=192.168.200.200: (37.337928203s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-604608 ip
helpers_test.go:175: Cleaning up "static-ip-604608" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-604608
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-604608: (2.096239486s)
--- PASS: TestKicStaticIP (39.66s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (72.64s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-344137 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-344137 --driver=docker  --container-runtime=containerd: (30.754790621s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-346893 --driver=docker  --container-runtime=containerd
E0610 16:39:00.677735    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-346893 --driver=docker  --container-runtime=containerd: (36.454351943s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-344137
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-346893
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-346893" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-346893
E0610 16:39:37.604875    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-346893: (1.994612536s)
helpers_test.go:175: Cleaning up "first-344137" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-344137
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-344137: (2.186810734s)
--- PASS: TestMinikubeProfile (72.64s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.61s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-484222 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-484222 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.608071237s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.61s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-484222 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.96s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-486035 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-486035 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.958889835s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.96s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-486035 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.67s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-484222 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-484222 --alsologtostderr -v=5: (1.665084251s)
--- PASS: TestMountStart/serial/DeleteFirst (1.67s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-486035 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

                                                
                                    
TestMountStart/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-486035
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-486035: (1.234344515s)
--- PASS: TestMountStart/serial/Stop (1.23s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.44s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-486035
E0610 16:40:08.386832    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/functional-351441/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-486035: (6.440095805s)
--- PASS: TestMountStart/serial/RestartStopped (7.44s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-486035 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (86.14s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-966600 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0610 16:40:22.598856    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt: no such file or directory
E0610 16:40:36.098696    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/functional-351441/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-966600 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m25.583941835s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (86.14s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.09s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-966600 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-966600 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-966600 -- rollout status deployment/busybox: (1.910407466s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-966600 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-966600 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-966600 -- exec busybox-67b7f59bb-gkkcj -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-966600 -- exec busybox-67b7f59bb-hl67q -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-966600 -- exec busybox-67b7f59bb-gkkcj -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-966600 -- exec busybox-67b7f59bb-hl67q -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-966600 -- exec busybox-67b7f59bb-gkkcj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-966600 -- exec busybox-67b7f59bb-hl67q -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.09s)
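The deploy step above applies the busybox manifest from testdata, waits for the rollout, and then requires in-cluster DNS to resolve from every replica. A condensed manual equivalent, assuming the same deployment name; <pod-name> is a placeholder for one of the pod names returned by the get pods call:

    out/minikube-linux-arm64 kubectl -p multinode-966600 -- rollout status deployment/busybox
    out/minikube-linux-arm64 kubectl -p multinode-966600 -- get pods -o jsonpath='{.items[*].metadata.name}'
    # each replica must resolve the cluster DNS names checked above
    out/minikube-linux-arm64 kubectl -p multinode-966600 -- exec <pod-name> -- nslookup kubernetes.default.svc.cluster.local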

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.1s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-966600 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-966600 -- exec busybox-67b7f59bb-gkkcj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-966600 -- exec busybox-67b7f59bb-gkkcj -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-966600 -- exec busybox-67b7f59bb-hl67q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-966600 -- exec busybox-67b7f59bb-hl67q -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.10s)
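The host-reachability check runs inside each pod: it resolves host.minikube.internal, extracts the address with the awk/cut pipeline shown above, and pings it once. A sketch with <pod-name> as a placeholder (192.168.58.1 is the address this run resolved):

    # pull the host IP out of the nslookup output (5th line, 3rd space-separated field)
    out/minikube-linux-arm64 kubectl -p multinode-966600 -- exec <pod-name> -- sh -c \
      "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    # a single ping to that address proves the pod can reach the host network
    out/minikube-linux-arm64 kubectl -p multinode-966600 -- exec <pod-name> -- sh -c "ping -c 1 192.168.58.1"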

                                                
                                    
TestMultiNode/serial/AddNode (30.03s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-966600 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-966600 -v 3 --alsologtostderr: (29.333499258s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (30.03s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.36s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.92s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 cp testdata/cp-test.txt multinode-966600:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 ssh -n multinode-966600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 cp multinode-966600:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile78923627/001/cp-test_multinode-966600.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 ssh -n multinode-966600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 cp multinode-966600:/home/docker/cp-test.txt multinode-966600-m02:/home/docker/cp-test_multinode-966600_multinode-966600-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 ssh -n multinode-966600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 ssh -n multinode-966600-m02 "sudo cat /home/docker/cp-test_multinode-966600_multinode-966600-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 cp multinode-966600:/home/docker/cp-test.txt multinode-966600-m03:/home/docker/cp-test_multinode-966600_multinode-966600-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 ssh -n multinode-966600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 ssh -n multinode-966600-m03 "sudo cat /home/docker/cp-test_multinode-966600_multinode-966600-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 cp testdata/cp-test.txt multinode-966600-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 ssh -n multinode-966600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 cp multinode-966600-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile78923627/001/cp-test_multinode-966600-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 ssh -n multinode-966600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 cp multinode-966600-m02:/home/docker/cp-test.txt multinode-966600:/home/docker/cp-test_multinode-966600-m02_multinode-966600.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 ssh -n multinode-966600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 ssh -n multinode-966600 "sudo cat /home/docker/cp-test_multinode-966600-m02_multinode-966600.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 cp multinode-966600-m02:/home/docker/cp-test.txt multinode-966600-m03:/home/docker/cp-test_multinode-966600-m02_multinode-966600-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 ssh -n multinode-966600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 ssh -n multinode-966600-m03 "sudo cat /home/docker/cp-test_multinode-966600-m02_multinode-966600-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 cp testdata/cp-test.txt multinode-966600-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 ssh -n multinode-966600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 cp multinode-966600-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile78923627/001/cp-test_multinode-966600-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 ssh -n multinode-966600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 cp multinode-966600-m03:/home/docker/cp-test.txt multinode-966600:/home/docker/cp-test_multinode-966600-m03_multinode-966600.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 ssh -n multinode-966600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 ssh -n multinode-966600 "sudo cat /home/docker/cp-test_multinode-966600-m03_multinode-966600.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 cp multinode-966600-m03:/home/docker/cp-test.txt multinode-966600-m02:/home/docker/cp-test_multinode-966600-m03_multinode-966600-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 ssh -n multinode-966600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 ssh -n multinode-966600-m02 "sudo cat /home/docker/cp-test_multinode-966600-m03_multinode-966600-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.92s)
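CopyFile round-trips a file with the cp subcommand in three directions (host to node, node to host, node to node) and verifies each copy with sudo cat over SSH. The core pattern, taken directly from the commands above:

    # host -> primary node, then read it back
    out/minikube-linux-arm64 -p multinode-966600 cp testdata/cp-test.txt multinode-966600:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p multinode-966600 ssh -n multinode-966600 "sudo cat /home/docker/cp-test.txt"
    # node -> node copies use the same subcommand, e.g. primary -> m02
    out/minikube-linux-arm64 -p multinode-966600 cp multinode-966600:/home/docker/cp-test.txt \
      multinode-966600-m02:/home/docker/cp-test_multinode-966600_multinode-966600-m02.txt
    out/minikube-linux-arm64 -p multinode-966600 ssh -n multinode-966600-m02 "sudo cat /home/docker/cp-test_multinode-966600_multinode-966600-m02.txt"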

                                                
                                    
TestMultiNode/serial/StopNode (2.36s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-966600 node stop m03: (1.238213429s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-966600 status: exit status 7 (562.331638ms)

                                                
                                                
-- stdout --
	multinode-966600
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-966600-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-966600-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-966600 status --alsologtostderr: exit status 7 (559.587315ms)

                                                
                                                
-- stdout --
	multinode-966600
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-966600-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-966600-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 16:42:26.422765   83436 out.go:296] Setting OutFile to fd 1 ...
	I0610 16:42:26.422936   83436 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 16:42:26.422947   83436 out.go:309] Setting ErrFile to fd 2...
	I0610 16:42:26.422954   83436 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 16:42:26.423167   83436 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16578-2220/.minikube/bin
	I0610 16:42:26.423376   83436 out.go:303] Setting JSON to false
	I0610 16:42:26.423451   83436 mustload.go:65] Loading cluster: multinode-966600
	I0610 16:42:26.423538   83436 notify.go:220] Checking for updates...
	I0610 16:42:26.423884   83436 config.go:182] Loaded profile config "multinode-966600": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
	I0610 16:42:26.423905   83436 status.go:255] checking status of multinode-966600 ...
	I0610 16:42:26.424728   83436 cli_runner.go:164] Run: docker container inspect multinode-966600 --format={{.State.Status}}
	I0610 16:42:26.445276   83436 status.go:330] multinode-966600 host status = "Running" (err=<nil>)
	I0610 16:42:26.445298   83436 host.go:66] Checking if "multinode-966600" exists ...
	I0610 16:42:26.445605   83436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-966600
	I0610 16:42:26.472799   83436 host.go:66] Checking if "multinode-966600" exists ...
	I0610 16:42:26.473128   83436 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 16:42:26.473176   83436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966600
	I0610 16:42:26.501822   83436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/multinode-966600/id_rsa Username:docker}
	I0610 16:42:26.600944   83436 ssh_runner.go:195] Run: systemctl --version
	I0610 16:42:26.606396   83436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 16:42:26.619868   83436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0610 16:42:26.685508   83436 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-06-10 16:42:26.673748782 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0610 16:42:26.686123   83436 kubeconfig.go:92] found "multinode-966600" server: "https://192.168.58.2:8443"
	I0610 16:42:26.686145   83436 api_server.go:166] Checking apiserver status ...
	I0610 16:42:26.686188   83436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 16:42:26.700490   83436 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1255/cgroup
	I0610 16:42:26.712404   83436 api_server.go:182] apiserver freezer: "13:freezer:/docker/18a0ac8cddefd0b4eb8f83070d555cb5bc5ca0c0bc26c658a74ae53b7be9c6fc/kubepods/burstable/pod76ea9473cfed4082518faf8f7d6dbed1/ae572a8fd9933ed555e92db6a27cc47e1e121ce5a0461fce998b148fbc7534f8"
	I0610 16:42:26.712482   83436 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/18a0ac8cddefd0b4eb8f83070d555cb5bc5ca0c0bc26c658a74ae53b7be9c6fc/kubepods/burstable/pod76ea9473cfed4082518faf8f7d6dbed1/ae572a8fd9933ed555e92db6a27cc47e1e121ce5a0461fce998b148fbc7534f8/freezer.state
	I0610 16:42:26.724511   83436 api_server.go:204] freezer state: "THAWED"
	I0610 16:42:26.724550   83436 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0610 16:42:26.733888   83436 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0610 16:42:26.733915   83436 status.go:421] multinode-966600 apiserver status = Running (err=<nil>)
	I0610 16:42:26.733925   83436 status.go:257] multinode-966600 status: &{Name:multinode-966600 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 16:42:26.733942   83436 status.go:255] checking status of multinode-966600-m02 ...
	I0610 16:42:26.734300   83436 cli_runner.go:164] Run: docker container inspect multinode-966600-m02 --format={{.State.Status}}
	I0610 16:42:26.757481   83436 status.go:330] multinode-966600-m02 host status = "Running" (err=<nil>)
	I0610 16:42:26.757504   83436 host.go:66] Checking if "multinode-966600-m02" exists ...
	I0610 16:42:26.757814   83436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-966600-m02
	I0610 16:42:26.775912   83436 host.go:66] Checking if "multinode-966600-m02" exists ...
	I0610 16:42:26.776301   83436 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 16:42:26.776359   83436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966600-m02
	I0610 16:42:26.795842   83436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16578-2220/.minikube/machines/multinode-966600-m02/id_rsa Username:docker}
	I0610 16:42:26.896860   83436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 16:42:26.910666   83436 status.go:257] multinode-966600-m02 status: &{Name:multinode-966600-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0610 16:42:26.910697   83436 status.go:255] checking status of multinode-966600-m03 ...
	I0610 16:42:26.911013   83436 cli_runner.go:164] Run: docker container inspect multinode-966600-m03 --format={{.State.Status}}
	I0610 16:42:26.928696   83436 status.go:330] multinode-966600-m03 host status = "Stopped" (err=<nil>)
	I0610 16:42:26.928722   83436 status.go:343] host is not running, skipping remaining checks
	I0610 16:42:26.928728   83436 status.go:257] multinode-966600-m03 status: &{Name:multinode-966600-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.36s)
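The --alsologtostderr trace above shows how the status command classifies the apiserver: it finds the kube-apiserver process, reads the freezer state of that process's cgroup (expecting THAWED), and only then probes the /healthz endpoint on https://192.168.58.2:8443, which returned 200 here. A rough manual version of the first two steps, with <pid> standing in for the process id returned by pgrep (the cgroup path is specific to this run):

    # locate the apiserver process inside the control-plane node
    out/minikube-linux-arm64 -p multinode-966600 ssh -- "sudo pgrep -xnf kube-apiserver.*minikube.*"
    # its freezer cgroup should report THAWED while the apiserver is running
    out/minikube-linux-arm64 -p multinode-966600 ssh -- "sudo egrep ^[0-9]+:freezer: /proc/<pid>/cgroup"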

                                                
                                    
TestMultiNode/serial/StartAfterStop (11.84s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-966600 node start m03 --alsologtostderr: (10.983806163s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
E0610 16:42:38.761370    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt: no such file or directory
--- PASS: TestMultiNode/serial/StartAfterStop (11.84s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (146.5s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-966600
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-966600
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-966600: (25.105500772s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-966600 --wait=true -v=8 --alsologtostderr
E0610 16:43:06.439068    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt: no such file or directory
E0610 16:44:37.605080    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-966600 --wait=true -v=8 --alsologtostderr: (2m1.257773791s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-966600
--- PASS: TestMultiNode/serial/RestartKeepsNodes (146.50s)
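The restart check is simply that the node set survives a full stop/start cycle; comparing the node list output before and after is the whole assertion. Condensed from the run above:

    out/minikube-linux-arm64 node list -p multinode-966600        # record the three nodes
    out/minikube-linux-arm64 stop -p multinode-966600
    out/minikube-linux-arm64 start -p multinode-966600 --wait=true -v=8 --alsologtostderr
    out/minikube-linux-arm64 node list -p multinode-966600        # should match the first listing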

                                                
                                    
TestMultiNode/serial/DeleteNode (5.12s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 node delete m03
E0610 16:45:08.386959    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/functional-351441/client.crt: no such file or directory
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-966600 node delete m03: (4.38503046s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.12s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.12s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-966600 stop: (23.933602264s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-966600 status: exit status 7 (94.074688ms)

                                                
                                                
-- stdout --
	multinode-966600
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-966600-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-966600 status --alsologtostderr: exit status 7 (90.162107ms)

                                                
                                                
-- stdout --
	multinode-966600
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-966600-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 16:45:34.466073   92051 out.go:296] Setting OutFile to fd 1 ...
	I0610 16:45:34.466230   92051 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 16:45:34.466237   92051 out.go:309] Setting ErrFile to fd 2...
	I0610 16:45:34.466242   92051 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 16:45:34.466397   92051 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16578-2220/.minikube/bin
	I0610 16:45:34.466604   92051 out.go:303] Setting JSON to false
	I0610 16:45:34.466656   92051 mustload.go:65] Loading cluster: multinode-966600
	I0610 16:45:34.466773   92051 notify.go:220] Checking for updates...
	I0610 16:45:34.467134   92051 config.go:182] Loaded profile config "multinode-966600": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
	I0610 16:45:34.467150   92051 status.go:255] checking status of multinode-966600 ...
	I0610 16:45:34.467635   92051 cli_runner.go:164] Run: docker container inspect multinode-966600 --format={{.State.Status}}
	I0610 16:45:34.488148   92051 status.go:330] multinode-966600 host status = "Stopped" (err=<nil>)
	I0610 16:45:34.488173   92051 status.go:343] host is not running, skipping remaining checks
	I0610 16:45:34.488180   92051 status.go:257] multinode-966600 status: &{Name:multinode-966600 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 16:45:34.488215   92051 status.go:255] checking status of multinode-966600-m02 ...
	I0610 16:45:34.488512   92051 cli_runner.go:164] Run: docker container inspect multinode-966600-m02 --format={{.State.Status}}
	I0610 16:45:34.508878   92051 status.go:330] multinode-966600-m02 host status = "Stopped" (err=<nil>)
	I0610 16:45:34.508898   92051 status.go:343] host is not running, skipping remaining checks
	I0610 16:45:34.508904   92051 status.go:257] multinode-966600-m02 status: &{Name:multinode-966600-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.12s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (99.36s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-966600 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0610 16:46:00.650244    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-966600 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m38.604018754s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-966600 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (99.36s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (47.08s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-966600
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-966600-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-966600-m02 --driver=docker  --container-runtime=containerd: exit status 14 (86.231701ms)

                                                
                                                
-- stdout --
	* [multinode-966600-m02] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16578-2220/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16578-2220/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-966600-m02' is duplicated with machine name 'multinode-966600-m02' in profile 'multinode-966600'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-966600-m03 --driver=docker  --container-runtime=containerd
E0610 16:47:38.756176    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-966600-m03 --driver=docker  --container-runtime=containerd: (44.189003354s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-966600
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-966600: exit status 80 (609.996751ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-966600
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-966600-m03 already exists in multinode-966600-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-966600-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-966600-m03: (2.137461169s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.08s)
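Profile names and machine names share one namespace here: starting a profile named multinode-966600-m02 collides with the existing worker machine of multinode-966600 and fails with MK_USAGE (exit status 14), and node add refuses to create m03 because that name is now taken by the standalone multinode-966600-m03 profile created mid-test (exit status 80). The two rejected invocations were:

    out/minikube-linux-arm64 start -p multinode-966600-m02 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 node add -p multinode-966600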

                                                
                                    
TestPreload (184.04s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-164580 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-164580 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m13.199988635s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-164580 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-164580 image pull gcr.io/k8s-minikube/busybox: (1.396369278s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-164580
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-164580: (12.085523869s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-164580 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E0610 16:49:37.605754    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.crt: no such file or directory
E0610 16:50:08.386085    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/functional-351441/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-164580 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (1m34.753316679s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-164580 image list
helpers_test.go:175: Cleaning up "test-preload-164580" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-164580
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-164580: (2.361393434s)
--- PASS: TestPreload (184.04s)
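The preload check is a persistence test: create a cluster with --preload=false on an older Kubernetes, pull an extra image, stop, restart without disabling the preload, and expect the pulled image to still be listed. Condensed from the run above (same profile name and versions):

    out/minikube-linux-arm64 start -p test-preload-164580 --memory=2200 --wait=true --preload=false \
      --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.4
    out/minikube-linux-arm64 -p test-preload-164580 image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-arm64 stop -p test-preload-164580
    out/minikube-linux-arm64 start -p test-preload-164580 --memory=2200 --wait=true \
      --driver=docker --container-runtime=containerd
    # gcr.io/k8s-minikube/busybox should still appear in the image list after the restart
    out/minikube-linux-arm64 -p test-preload-164580 image list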

                                                
                                    
TestScheduledStopUnix (118.76s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-836537 --memory=2048 --driver=docker  --container-runtime=containerd
E0610 16:51:31.458881    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/functional-351441/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-836537 --memory=2048 --driver=docker  --container-runtime=containerd: (42.082380293s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-836537 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-836537 -n scheduled-stop-836537
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-836537 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-836537 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-836537 -n scheduled-stop-836537
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-836537
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-836537 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0610 16:52:38.756977    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-836537
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-836537: exit status 7 (67.099358ms)

                                                
                                                
-- stdout --
	scheduled-stop-836537
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-836537 -n scheduled-stop-836537
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-836537 -n scheduled-stop-836537: exit status 7 (66.534579ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-836537" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-836537
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-836537: (5.050955401s)
--- PASS: TestScheduledStopUnix (118.76s)
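Scheduled stop is exercised three ways above: arm a long schedule, cancel it, then arm a short one and let it fire, after which status reports Stopped with exit status 7. A condensed manual sequence using the same flags (the "process already finished" lines in the log indicate the previously scheduled stop process had already been replaced or had run):

    out/minikube-linux-arm64 stop -p scheduled-stop-836537 --schedule 5m       # arm a stop five minutes out
    out/minikube-linux-arm64 stop -p scheduled-stop-836537 --cancel-scheduled  # clear the pending schedule
    out/minikube-linux-arm64 stop -p scheduled-stop-836537 --schedule 15s      # arm a short schedule and wait
    out/minikube-linux-arm64 status -p scheduled-stop-836537                   # exit status 7 once the host is Stopped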

                                                
                                    
TestInsufficientStorage (13.14s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-621741 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-621741 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.620445071s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d39fbd67-b2a5-47ff-8dec-87e838e99da7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-621741] minikube v1.30.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6389eb74-ff03-4fec-bd8a-6d599203c7dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16578"}}
	{"specversion":"1.0","id":"f86f3587-4cd3-41c7-b8cd-0638c386648e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ba488b2a-6587-415b-af50-32c65fe397d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16578-2220/kubeconfig"}}
	{"specversion":"1.0","id":"24d18657-9155-46a8-9233-cb5e760cca01","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16578-2220/.minikube"}}
	{"specversion":"1.0","id":"d340baa4-aa8c-4a92-aa63-b166350a0426","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"98ab7a7e-fefc-4435-97d9-ee25dda6f243","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"76a79fb9-0700-4a22-a581-9e28292c519e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c89789a7-64fb-42b9-96b7-e97038598201","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"9f82ac9a-1fd5-4724-804d-b9ae4e1eb833","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"bd36b552-f74b-4fee-95f1-6414a71e04c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"c3f6627d-0fb1-4df0-af4e-98191ce5db2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-621741 in cluster insufficient-storage-621741","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"493d9e56-3536-475a-82b2-a6ccfea939f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"7c61d4e2-61df-4ff0-9db4-076950e8ccf6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"4acc15a0-0fec-4e22-a39e-e2898cb6f552","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-621741 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-621741 --output=json --layout=cluster: exit status 7 (299.788286ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-621741","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-621741","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 16:53:18.739503  109645 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-621741" does not appear in /home/jenkins/minikube-integration/16578-2220/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-621741 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-621741 --output=json --layout=cluster: exit status 7 (320.408186ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-621741","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-621741","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 16:53:19.060626  109698 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-621741" does not appear in /home/jenkins/minikube-integration/16578-2220/kubeconfig
	E0610 16:53:19.072464  109698 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/insufficient-storage-621741/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-621741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-621741
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-621741: (1.898581246s)
--- PASS: TestInsufficientStorage (13.14s)
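The low-disk condition is simulated rather than real: the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE values echoed in the JSON output make the start believe /var is effectively full, so it aborts with RSRC_DOCKER_STORAGE (exit status 26), and the emitted advice names --force as the override. A sketch of the same invocation, assuming those variables are honored when set in the environment as they appear to be here:

    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      out/minikube-linux-arm64 start -p insufficient-storage-621741 --memory=2048 --output=json --wait=true \
      --driver=docker --container-runtime=containerd
    # expected: exit status 26 with the RSRC_DOCKER_STORAGE advice shown above; '--force' would skip the check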

                                                
                                    
TestRunningBinaryUpgrade (111.64s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.22.0.72303252.exe start -p running-upgrade-922557 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.22.0.72303252.exe start -p running-upgrade-922557 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (1m15.821619303s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-922557 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:142: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-922557 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (31.88447825s)
helpers_test.go:175: Cleaning up "running-upgrade-922557" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-922557
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-922557: (2.701650182s)
--- PASS: TestRunningBinaryUpgrade (111.64s)
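The running-upgrade scenario is nothing more than starting the same profile with a newer binary while the cluster created by the old release is still up. The two start invocations from the run:

    # the archived v1.22.0 release creates the cluster ...
    /tmp/minikube-v1.22.0.72303252.exe start -p running-upgrade-922557 --memory=2200 --vm-driver=docker --container-runtime=containerd
    # ... and the freshly built binary takes the same profile over in place
    out/minikube-linux-arm64 start -p running-upgrade-922557 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd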

                                                
                                    
TestKubernetesUpgrade (437.98s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-677214 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-677214 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m17.256271265s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-677214
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-677214: (1.466501611s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-677214 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-677214 status --format={{.Host}}: exit status 7 (77.540127ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-677214 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-677214 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5m15.655253611s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-677214 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-677214 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-677214 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd: exit status 106 (85.963931ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-677214] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16578-2220/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16578-2220/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.27.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-677214
	    minikube start -p kubernetes-upgrade-677214 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6772142 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.27.2, by running:
	    
	    minikube start -p kubernetes-upgrade-677214 --kubernetes-version=v1.27.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-677214 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-677214 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (40.686662505s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-677214" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-677214
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-677214: (2.634847893s)
--- PASS: TestKubernetesUpgrade (437.98s)
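The upgrade path is stop-then-start with a newer --kubernetes-version; the attempted downgrade is refused with K8S_DOWNGRADE_UNSUPPORTED (exit status 106), and the suggestion block above lists the supported alternatives (delete and recreate, start a second cluster, or stay on the current version). Condensed from the run:

    out/minikube-linux-arm64 start -p kubernetes-upgrade-677214 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 stop -p kubernetes-upgrade-677214
    out/minikube-linux-arm64 start -p kubernetes-upgrade-677214 --memory=2200 --kubernetes-version=v1.27.2 --driver=docker --container-runtime=containerd
    # going back down is rejected; per the suggestion above, the supported route is:
    #   minikube delete -p kubernetes-upgrade-677214
    #   minikube start -p kubernetes-upgrade-677214 --kubernetes-version=v1.16.0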

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-984704 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-984704 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (78.289854ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-984704] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16578-2220/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16578-2220/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
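The exit status 14 (MK_USAGE) above is plain flag validation: --kubernetes-version is rejected whenever --no-kubernetes is set. A hedged sketch of such a mutual-exclusion check using Go's standard flag package (the flag names match the CLI invocation above; everything else is assumed):

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start the node without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
	flag.Parse()

	// The two flags contradict each other, so exit with a usage error,
	// like the exit status 14 observed in the run above.
	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14)
	}
}
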
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (37.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-984704 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-984704 --driver=docker  --container-runtime=containerd: (36.877064259s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-984704 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.26s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (30.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-984704 --no-kubernetes --driver=docker  --container-runtime=containerd
E0610 16:54:01.799319    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-984704 --no-kubernetes --driver=docker  --container-runtime=containerd: (28.086974061s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-984704 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-984704 status -o json: exit status 2 (323.273225ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-984704","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
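The JSON from "minikube status -o json" is what the test reads back: the host container keeps running while kubelet and apiserver report Stopped. A short Go sketch that decodes that exact payload (field names taken from the output above; this is not the test's own helper):

package main

import (
	"encoding/json"
	"fmt"
)

// profileStatus mirrors the fields visible in the status output above.
type profileStatus struct {
	Name      string
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	raw := `{"Name":"NoKubernetes-984704","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`

	var st profileStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// With --no-kubernetes the host keeps running but kubelet stays stopped.
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}
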
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-984704
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-984704: (1.922033064s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (30.33s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (6.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-984704 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-984704 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.071530831s)
--- PASS: TestNoKubernetes/serial/Start (6.07s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-984704 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-984704 "sudo systemctl is-active --quiet service kubelet": exit status 1 (281.9679ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
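The non-zero exit here is the expected result: inside the node, "systemctl is-active --quiet" exits non-zero for an inactive unit (status 3 in this run), and the minikube ssh wrapper surfaces that as exit status 1, which is exactly what the test wants when Kubernetes is disabled. A hedged Go sketch of the same probe run locally (command copied from above; the interpretation is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same probe the test sends over ssh: ask systemd whether kubelet is active.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "service", "kubelet")
	err := cmd.Run()

	if exitErr, ok := err.(*exec.ExitError); ok {
		// A non-zero exit (status 3 in the log above) means the unit is not
		// active, which is the desired state when Kubernetes is disabled.
		fmt.Println("kubelet not running, exit code:", exitErr.ExitCode())
		return
	}
	if err != nil {
		fmt.Println("could not query systemd:", err)
		return
	}
	fmt.Println("kubelet is active")
}
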
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.67s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.67s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-984704
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-984704: (1.238766816s)
--- PASS: TestNoKubernetes/serial/Stop (1.24s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-984704 --driver=docker  --container-runtime=containerd
E0610 16:54:37.605096    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-984704 --driver=docker  --container-runtime=containerd: (6.530921952s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.53s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-984704 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-984704 "sudo systemctl is-active --quiet service kubelet": exit status 1 (339.402464ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.37s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.37s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (172.86s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.22.0.831924810.exe start -p stopped-upgrade-118658 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0610 16:55:08.386487    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/functional-351441/client.crt: no such file or directory
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.22.0.831924810.exe start -p stopped-upgrade-118658 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (1m40.714578726s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.22.0.831924810.exe -p stopped-upgrade-118658 stop
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.22.0.831924810.exe -p stopped-upgrade-118658 stop: (20.630306068s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-118658 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0610 16:57:38.756646    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt: no such file or directory
version_upgrade_test.go:210: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-118658 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (51.518890711s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (172.86s)
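The upgrade scenario above is driven entirely from the CLI in three steps: start the cluster with an old released binary, stop it with that same binary, then start it again with the binary under test, which must adopt the stopped cluster. A hedged Go sketch of that sequence (binary paths and profile name copied from the log, flags abbreviated, error handling simplified):

package main

import (
	"log"
	"os/exec"
)

// run executes one CLI step of the upgrade flow and stops on the first failure.
func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
	}
}

func main() {
	old := "/tmp/minikube-v1.22.0.831924810.exe"
	cur := "out/minikube-linux-arm64"
	profile := "stopped-upgrade-118658"

	// 1. Create the cluster with the old release.
	run(old, "start", "-p", profile, "--memory=2200", "--vm-driver=docker", "--container-runtime=containerd")
	// 2. Stop it with the same old binary.
	run(old, "-p", profile, "stop")
	// 3. Start it again with the binary under test; it must adopt the stopped cluster.
	run(cur, "start", "-p", profile, "--memory=2200", "--driver=docker", "--container-runtime=containerd")
}
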

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.42s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-118658
version_upgrade_test.go:218: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-118658: (1.416263683s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.42s)

                                                
                                    
x
+
TestPause/serial/Start (88.54s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-977012 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E0610 16:59:37.605741    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.crt: no such file or directory
E0610 17:00:08.386746    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/functional-351441/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-977012 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m28.539203662s)
--- PASS: TestPause/serial/Start (88.54s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (18.79s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-977012 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-977012 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (18.758905492s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (18.79s)

                                                
                                    
x
+
TestPause/serial/Pause (0.87s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-977012 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.87s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.37s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-977012 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-977012 --output=json --layout=cluster: exit status 2 (367.806362ms)

                                                
                                                
-- stdout --
	{"Name":"pause-977012","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-977012","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
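With --layout=cluster the status payload is richer; going by the StatusName fields in the output above, code 418 means Paused, 405 Stopped and 200 OK. A small Go sketch that pulls the per-component states out of a trimmed copy of that payload (not the test's parser):

package main

import (
	"encoding/json"
	"fmt"
)

// Only the fields needed to read the paused/stopped state are modelled here.
type component struct {
	Name       string
	StatusCode int
	StatusName string
}

type clusterStatus struct {
	Name       string
	StatusCode int
	StatusName string
	Nodes      []struct {
		Name       string
		Components map[string]component
	}
}

func main() {
	raw := `{"Name":"pause-977012","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-977012","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`

	var cs clusterStatus
	if err := json.Unmarshal([]byte(raw), &cs); err != nil {
		panic(err)
	}
	fmt.Println("cluster:", cs.StatusName) // Paused
	for _, n := range cs.Nodes {
		for name, c := range n.Components {
			fmt.Printf("  %s: %s (%d)\n", name, c.StatusName, c.StatusCode)
		}
	}
}
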
--- PASS: TestPause/serial/VerifyStatus (0.37s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.74s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-977012 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.74s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.94s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-977012 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.94s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.92s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-977012 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-977012 --alsologtostderr -v=5: (2.917473733s)
--- PASS: TestPause/serial/DeletePaused (2.92s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.59s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-977012
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-977012: exit status 1 (20.306366ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-977012: no such volume

                                                
                                                
** /stderr **
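VerifyDeletedResources relies on the Docker CLI behaviour shown above: once the profile volume is gone, "docker volume inspect" exits non-zero with "no such volume", so a failing inspect is the success condition. A hedged Go sketch of the same check (profile name reused from the log; not the actual test helper):

package main

import (
	"fmt"
	"os/exec"
)

// volumeGone reports whether `docker volume inspect <name>` fails, which
// after `minikube delete` means the volume was cleaned up.
func volumeGone(name string) bool {
	err := exec.Command("docker", "volume", "inspect", name).Run()
	return err != nil // exit status 1 plus "no such volume" in the log above
}

func main() {
	fmt.Println("pause-977012 volume removed:", volumeGone("pause-977012"))
}
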
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.59s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (5.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:230: (dbg) Run:  out/minikube-linux-arm64 start -p false-019039 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-019039 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (340.546363ms)

                                                
                                                
-- stdout --
	* [false-019039] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16578-2220/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16578-2220/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 17:02:07.340163  145128 out.go:296] Setting OutFile to fd 1 ...
	I0610 17:02:07.340449  145128 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 17:02:07.340481  145128 out.go:309] Setting ErrFile to fd 2...
	I0610 17:02:07.340500  145128 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 17:02:07.340723  145128 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16578-2220/.minikube/bin
	I0610 17:02:07.341213  145128 out.go:303] Setting JSON to false
	I0610 17:02:07.342312  145128 start.go:127] hostinfo: {"hostname":"ip-172-31-31-251","uptime":2672,"bootTime":1686413856,"procs":275,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0610 17:02:07.342413  145128 start.go:137] virtualization:  
	I0610 17:02:07.346125  145128 out.go:177] * [false-019039] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0610 17:02:07.350767  145128 notify.go:220] Checking for updates...
	I0610 17:02:07.354405  145128 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 17:02:07.356116  145128 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 17:02:07.357774  145128 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16578-2220/kubeconfig
	I0610 17:02:07.359353  145128 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16578-2220/.minikube
	I0610 17:02:07.361127  145128 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0610 17:02:07.362944  145128 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 17:02:07.365230  145128 config.go:182] Loaded profile config "force-systemd-flag-081066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
	I0610 17:02:07.365439  145128 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 17:02:07.439980  145128 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0610 17:02:07.440145  145128 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0610 17:02:07.606816  145128 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-06-10 17:02:07.592192157 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0610 17:02:07.606926  145128 docker.go:294] overlay module found
	I0610 17:02:07.609053  145128 out.go:177] * Using the docker driver based on user configuration
	I0610 17:02:07.610749  145128 start.go:297] selected driver: docker
	I0610 17:02:07.610765  145128 start.go:875] validating driver "docker" against <nil>
	I0610 17:02:07.610778  145128 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 17:02:07.613011  145128 out.go:177] 
	W0610 17:02:07.614671  145128 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0610 17:02:07.616316  145128 out.go:177] 

                                                
                                                
** /stderr **
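The exit status 14 is another usage guard: with the containerd runtime, minikube rejects --cni=false because containerd needs a CNI plugin for pod networking. A minimal sketch of that kind of validation, paraphrased from the error text above (function and constant names are assumptions, not minikube's own code):

package main

import (
	"fmt"
	"os"
)

// validateRuntimeCNI mirrors the guard behind the MK_USAGE exit above:
// the containerd runtime cannot run with CNI explicitly disabled.
func validateRuntimeCNI(containerRuntime, cni string) error {
	if containerRuntime == "containerd" && cni == "false" {
		return fmt.Errorf("the %q container runtime requires CNI", containerRuntime)
	}
	return nil
}

func main() {
	if err := validateRuntimeCNI("containerd", "false"); err != nil {
		fmt.Fprintln(os.Stderr, "usage error:", err)
		os.Exit(14)
	}
}
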
net_test.go:86: 
----------------------- debugLogs start: false-019039 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-019039

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-019039

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-019039

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-019039

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-019039

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-019039

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-019039

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-019039

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-019039

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-019039

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019039"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019039"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019039"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-019039

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019039"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019039"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-019039" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-019039" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-019039" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-019039" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-019039" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-019039" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-019039" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-019039" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019039"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019039"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019039"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019039"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019039"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-019039" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-019039" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-019039" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019039"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019039"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019039"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019039"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019039"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-019039

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019039"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019039"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019039"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019039"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019039"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019039"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019039"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019039"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019039"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019039"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019039"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019039"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019039"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019039"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019039"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019039"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019039"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019039"

                                                
                                                
----------------------- debugLogs end: false-019039 [took: 4.536671144s] --------------------------------
helpers_test.go:175: Cleaning up "false-019039" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-019039
--- PASS: TestNetworkPlugins/group/false (5.19s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (143.92s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-781573 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E0610 17:04:37.605871    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.crt: no such file or directory
E0610 17:05:08.386684    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/functional-351441/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-781573 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m23.916477348s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (143.92s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (9.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-781573 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9840d113-18b7-4078-8148-2e271619e112] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9840d113-18b7-4078-8148-2e271619e112] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.043331325s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-781573 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.80s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (72.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-180787 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-180787 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.2: (1m12.38349s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (72.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-781573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-781573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.496544703s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-781573 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.66s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (13.92s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-781573 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-781573 --alsologtostderr -v=3: (13.921055772s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.92s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-781573 -n old-k8s-version-781573
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-781573 -n old-k8s-version-781573: exit status 7 (108.578519ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-781573 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (681.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-781573 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E0610 17:07:38.756751    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-781573 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (11m20.679064457s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-781573 -n old-k8s-version-781573
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (681.06s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.67s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-180787 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2ccce8be-ad35-44f6-8585-7620ac7f90aa] Pending
helpers_test.go:344: "busybox" [2ccce8be-ad35-44f6-8585-7620ac7f90aa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2ccce8be-ad35-44f6-8585-7620ac7f90aa] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.042534074s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-180787 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.67s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-180787 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-180787 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-180787 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-180787 --alsologtostderr -v=3: (12.124849621s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.12s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-180787 -n no-preload-180787
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-180787 -n no-preload-180787: exit status 7 (77.159336ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-180787 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (362.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-180787 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.2
E0610 17:08:11.459682    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/functional-351441/client.crt: no such file or directory
E0610 17:09:37.605843    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.crt: no such file or directory
E0610 17:10:08.386703    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/functional-351441/client.crt: no such file or directory
E0610 17:10:41.800358    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt: no such file or directory
E0610 17:12:38.756942    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-180787 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.2: (6m2.01372732s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-180787 -n no-preload-180787
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (362.40s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-zsxjd" [a159ec91-2534-43d2-9ad5-4dd279069222] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.023569249s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)
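
The UserAppExistsAfterStop step polls until pods matching the k8s-app=kubernetes-dashboard label report Running, as the "waiting 9m0s for pods matching ..." lines show. A rough client-go sketch of that kind of wait, assuming a kubeconfig path and namespace as inputs; the suite's own helper in helpers_test.go may differ in detail:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls until at least one pod matching the selector is Running,
// or the timeout expires. This mirrors the waiting/healthy-within log lines,
// but is only a sketch, not the test suite's helper.
func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %q in %q", selector, ns)
}

func main() {
	// Hypothetical kubeconfig location holding the per-profile context.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForLabel(cs, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard", 9*time.Minute); err != nil {
		panic(err)
	}
}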

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-zsxjd" [a159ec91-2534-43d2-9ad5-4dd279069222] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007464519s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-180787 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-180787 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/no-preload/serial/Pause (3.36s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-180787 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-180787 -n no-preload-180787
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-180787 -n no-preload-180787: exit status 2 (367.22381ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-180787 -n no-preload-180787
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-180787 -n no-preload-180787: exit status 2 (340.281951ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-180787 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-180787 -n no-preload-180787
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-180787 -n no-preload-180787
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.36s)
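
The Pause step reads component state with `minikube status --format={{.APIServer}}` and `--format={{.Kubelet}}`; the --format value is a Go text/template rendered against the status object, and the non-zero exits (2 here, 7 for a fully stopped host) encode that state rather than a hard failure, which is why the test logs "status error ... (may be ok)". A small sketch of the template mechanics, using a stand-in Status struct whose field set is an assumption, not minikube's exact type:

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in for the struct that --format renders; the real type
// has more fields, this only shows how the {{.APIServer}} template resolves.
type Status struct {
	Host      string
	APIServer string
	Kubelet   string
}

func main() {
	st := Status{Host: "Running", APIServer: "Paused", Kubelet: "Stopped"}
	// Same template syntax as --format={{.APIServer}} in the log above.
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}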

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (59.89s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-519935 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.2
E0610 17:14:37.605446    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.crt: no such file or directory
E0610 17:15:08.386646    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/functional-351441/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-519935 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.2: (59.890571797s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (59.89s)

TestStartStop/group/embed-certs/serial/DeployApp (9.49s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-519935 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a6dfca43-b622-47b8-b2a8-5ee0e4cd9472] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a6dfca43-b622-47b8-b2a8-5ee0e4cd9472] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.031020601s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-519935 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.49s)
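
Each "(dbg) Run:" line is the test shelling out to a binary and capturing its combined output; the DeployApp step above, for instance, ends by exec'ing into the busybox pod to read "ulimit -n". A bare-bones sketch of that pattern with os/exec; the run helper name and error handling are illustrative only, not the suite's actual wrapper:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and returns its combined stdout+stderr,
// roughly what a "(dbg) Run: ..." log line represents.
func run(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return string(out), err
}

func main() {
	// Mirrors: kubectl --context embed-certs-519935 exec busybox -- /bin/sh -c "ulimit -n"
	out, err := run("kubectl", "--context", "embed-certs-519935",
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
	if err != nil {
		fmt.Println("command failed:", err)
	}
	fmt.Print(out)
}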

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-519935 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-519935 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

TestStartStop/group/embed-certs/serial/Stop (12.19s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-519935 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-519935 --alsologtostderr -v=3: (12.187663344s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.19s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-519935 -n embed-certs-519935
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-519935 -n embed-certs-519935: exit status 7 (75.572742ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-519935 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/embed-certs/serial/SecondStart (351.13s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-519935 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.2
E0610 17:17:38.756254    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt: no such file or directory
E0610 17:17:40.560155    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/no-preload-180787/client.crt: no such file or directory
E0610 17:17:40.566121    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/no-preload-180787/client.crt: no such file or directory
E0610 17:17:40.576439    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/no-preload-180787/client.crt: no such file or directory
E0610 17:17:40.596703    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/no-preload-180787/client.crt: no such file or directory
E0610 17:17:40.637062    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/no-preload-180787/client.crt: no such file or directory
E0610 17:17:40.717386    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/no-preload-180787/client.crt: no such file or directory
E0610 17:17:40.877744    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/no-preload-180787/client.crt: no such file or directory
E0610 17:17:41.198676    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/no-preload-180787/client.crt: no such file or directory
E0610 17:17:41.839532    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/no-preload-180787/client.crt: no such file or directory
E0610 17:17:43.119750    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/no-preload-180787/client.crt: no such file or directory
E0610 17:17:45.679974    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/no-preload-180787/client.crt: no such file or directory
E0610 17:17:50.800827    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/no-preload-180787/client.crt: no such file or directory
E0610 17:18:01.041105    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/no-preload-180787/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-519935 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.2: (5m50.676756267s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-519935 -n embed-certs-519935
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (351.13s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-hlkcs" [7bcf6671-5f5f-4cd9-8713-0f9aaab92e85] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.023737796s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-hlkcs" [7bcf6671-5f5f-4cd9-8713-0f9aaab92e85] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005975095s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-781573 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-781573 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.36s)
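
VerifyKubernetesImages lists the node's images over SSH with "sudo crictl images -o json" and reports any repo tags outside the expected minikube/Kubernetes set (the kindnet and busybox entries above). A sketch of that filtering, assuming crictl's usual {"images":[{"repoTags":[...]}]} JSON shape and a hypothetical isMinikubeImage allow-list check standing in for the test's real expected-image list:

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// imageList matches the usual shape of `crictl images -o json` output
// (an assumption here; only repoTags is used).
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// isMinikubeImage is a hypothetical allow-list check.
func isMinikubeImage(tag string) bool {
	return strings.HasPrefix(tag, "registry.k8s.io/") ||
		strings.HasPrefix(tag, "gcr.io/k8s-minikube/storage-provisioner")
}

func main() {
	raw := []byte(`{"images":[{"repoTags":["kindest/kindnetd:v20230511-dc714da8"]},
	               {"repoTags":["registry.k8s.io/pause:3.9"]}]}`)
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if !isMinikubeImage(tag) {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}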

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-781573 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-781573 -n old-k8s-version-781573
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-781573 -n old-k8s-version-781573: exit status 2 (375.890694ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-781573 -n old-k8s-version-781573
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-781573 -n old-k8s-version-781573: exit status 2 (378.56871ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-781573 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-781573 -n old-k8s-version-781573
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-781573 -n old-k8s-version-781573
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.49s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (65.9s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-389434 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.2
E0610 17:19:02.482314    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/no-preload-180787/client.crt: no such file or directory
E0610 17:19:20.652004    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-389434 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.2: (1m5.899800693s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (65.90s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-389434 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [19ec1674-20f0-4bfe-b58f-d939f06ff746] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [19ec1674-20f0-4bfe-b58f-d939f06ff746] Running
E0610 17:19:37.605457    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.028161808s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-389434 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.53s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-389434 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-389434 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-389434 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-389434 --alsologtostderr -v=3: (12.153633072s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.15s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-389434 -n default-k8s-diff-port-389434
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-389434 -n default-k8s-diff-port-389434: exit status 7 (70.441417ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-389434 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (346.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-389434 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.2
E0610 17:20:08.386121    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/functional-351441/client.crt: no such file or directory
E0610 17:20:24.403394    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/no-preload-180787/client.crt: no such file or directory
E0610 17:21:19.651315    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/old-k8s-version-781573/client.crt: no such file or directory
E0610 17:21:19.656633    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/old-k8s-version-781573/client.crt: no such file or directory
E0610 17:21:19.666845    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/old-k8s-version-781573/client.crt: no such file or directory
E0610 17:21:19.687172    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/old-k8s-version-781573/client.crt: no such file or directory
E0610 17:21:19.727514    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/old-k8s-version-781573/client.crt: no such file or directory
E0610 17:21:19.807695    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/old-k8s-version-781573/client.crt: no such file or directory
E0610 17:21:19.968106    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/old-k8s-version-781573/client.crt: no such file or directory
E0610 17:21:20.289154    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/old-k8s-version-781573/client.crt: no such file or directory
E0610 17:21:20.930318    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/old-k8s-version-781573/client.crt: no such file or directory
E0610 17:21:22.210665    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/old-k8s-version-781573/client.crt: no such file or directory
E0610 17:21:24.771159    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/old-k8s-version-781573/client.crt: no such file or directory
E0610 17:21:29.891986    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/old-k8s-version-781573/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-389434 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.2: (5m45.477678554s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-389434 -n default-k8s-diff-port-389434
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (346.03s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-54zrm" [7e307445-052b-4cd8-b82e-457a6c70b6bb] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0610 17:21:40.132290    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/old-k8s-version-781573/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-54zrm" [7e307445-052b-4cd8-b82e-457a6c70b6bb] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.024772645s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-54zrm" [7e307445-052b-4cd8-b82e-457a6c70b6bb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007507007s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-519935 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-519935 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/embed-certs/serial/Pause (3.36s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-519935 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-519935 -n embed-certs-519935
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-519935 -n embed-certs-519935: exit status 2 (355.472215ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-519935 -n embed-certs-519935
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-519935 -n embed-certs-519935: exit status 2 (363.405329ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-519935 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-519935 -n embed-certs-519935
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-519935 -n embed-certs-519935
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.36s)

TestStartStop/group/newest-cni/serial/FirstStart (44.63s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-812986 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.2
E0610 17:22:00.613698    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/old-k8s-version-781573/client.crt: no such file or directory
E0610 17:22:38.756698    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt: no such file or directory
E0610 17:22:40.560457    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/no-preload-180787/client.crt: no such file or directory
E0610 17:22:41.574277    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/old-k8s-version-781573/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-812986 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.2: (44.627782306s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (44.63s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-812986 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-812986 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.052560386s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)

TestStartStop/group/newest-cni/serial/Stop (1.3s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-812986 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-812986 --alsologtostderr -v=3: (1.299710216s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.30s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-812986 -n newest-cni-812986
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-812986 -n newest-cni-812986: exit status 7 (67.275941ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-812986 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/newest-cni/serial/SecondStart (40.9s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-812986 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.2
E0610 17:23:08.244029    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/no-preload-180787/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-812986 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.2: (40.457930794s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-812986 -n newest-cni-812986
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (40.90s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-812986 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/newest-cni/serial/Pause (3.24s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-812986 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-812986 -n newest-cni-812986
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-812986 -n newest-cni-812986: exit status 2 (362.000509ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-812986 -n newest-cni-812986
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-812986 -n newest-cni-812986: exit status 2 (366.03554ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-812986 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-812986 -n newest-cni-812986
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-812986 -n newest-cni-812986
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.24s)

TestNetworkPlugins/group/auto/Start (87.78s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-arm64 start -p auto-019039 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0610 17:24:03.495330    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/old-k8s-version-781573/client.crt: no such file or directory
E0610 17:24:37.605833    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/addons-048679/client.crt: no such file or directory
E0610 17:24:51.459826    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/functional-351441/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-arm64 start -p auto-019039 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m27.777501922s)
--- PASS: TestNetworkPlugins/group/auto/Start (87.78s)

TestNetworkPlugins/group/auto/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-019039 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.40s)

TestNetworkPlugins/group/auto/NetCatPod (10.54s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-019039 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-vb875" [69e4ef26-e1d5-496c-a1b2-194175703fe2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-vb875" [69e4ef26-e1d5-496c-a1b2-194175703fe2] Running
E0610 17:25:08.386461    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/functional-351441/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.02311332s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.54s)

TestNetworkPlugins/group/auto/DNS (0.37s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-019039 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.37s)

TestNetworkPlugins/group/auto/Localhost (0.3s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-019039 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.30s)

TestNetworkPlugins/group/auto/HairPin (0.29s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-019039 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.29s)
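
The DNS, Localhost and HairPin steps above run nslookup and "nc -z" from inside the netcat deployment to check in-cluster name resolution, loopback reachability, and hairpin traffic back to the pod's own service. The same probes, expressed as a Go sketch of what effectively runs inside the pod (names and ports taken from the log; an illustration, not the test code):

package main

import (
	"fmt"
	"net"
	"time"
)

// probe attempts a TCP connection with a short timeout, the Go equivalent of
// the `nc -w 5 -z host port` used by the Localhost and HairPin checks.
func probe(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	// DNS check: resolve the in-cluster API service name, like `nslookup kubernetes.default`.
	if addrs, err := net.LookupHost("kubernetes.default"); err == nil {
		fmt.Println("kubernetes.default resolves to", addrs)
	}
	// Localhost check: the netcat pod listening on its own loopback, port 8080.
	fmt.Println("localhost:", probe("localhost:8080"))
	// HairPin check: the pod reaching itself through its service name.
	fmt.Println("hairpin:", probe("netcat:8080"))
}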

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (18.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-5gzw7" [5d2df776-cb57-4e26-b6ab-e2e4177bfad1] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-5gzw7" [5d2df776-cb57-4e26-b6ab-e2e4177bfad1] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 18.042969534s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (18.04s)

TestNetworkPlugins/group/kindnet/Start (63.3s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-019039 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:111: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-019039 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m3.303297268s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (63.30s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-5gzw7" [5d2df776-cb57-4e26-b6ab-e2e4177bfad1] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010241876s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-389434 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.17s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.48s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-389434 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.48s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-389434 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-389434 --alsologtostderr -v=1: (1.124425505s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-389434 -n default-k8s-diff-port-389434
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-389434 -n default-k8s-diff-port-389434: exit status 2 (484.444882ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-389434 -n default-k8s-diff-port-389434
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-389434 -n default-k8s-diff-port-389434: exit status 2 (488.950097ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-389434 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-389434 --alsologtostderr -v=1: (1.076869639s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-389434 -n default-k8s-diff-port-389434
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-389434 -n default-k8s-diff-port-389434
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.84s)
E0610 17:31:19.651334    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/old-k8s-version-781573/client.crt: no such file or directory
E0610 17:31:24.935382    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/auto-019039/client.crt: no such file or directory

TestNetworkPlugins/group/calico/Start (84.13s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-arm64 start -p calico-019039 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E0610 17:26:19.650484    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/old-k8s-version-781573/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-arm64 start -p calico-019039 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m24.130419074s)
--- PASS: TestNetworkPlugins/group/calico/Start (84.13s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-mjqhs" [3fdba6cd-6582-4589-96af-dd80c1562a68] Running
E0610 17:26:47.336193    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/old-k8s-version-781573/client.crt: no such file or directory
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.029053734s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-019039 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.51s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-019039 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-wz7md" [392ef773-83af-452b-9596-f50b1ce2b543] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-wz7md" [392ef773-83af-452b-9596-f50b1ce2b543] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.026012388s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-019039 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-019039 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-019039 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.24s)
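Editor's note: the DNS, Localhost and HairPin checks above all exec into the netcat deployment and probe from inside the pod. The same three probes, runnable by hand against the kindnet profile (nc flags copied from the test; the hairpin case reaches the pod back through its own "netcat" service name):

    # in-cluster DNS: resolve the kubernetes API service
    kubectl --context kindnet-019039 exec deployment/netcat -- nslookup kubernetes.default
    # localhost: the pod can reach its own port 8080 over loopback
    kubectl --context kindnet-019039 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # hairpin: the pod can reach itself through the netcat service
    kubectl --context kindnet-019039 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"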

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (77.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-019039 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:111: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-019039 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m17.042868436s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (77.04s)
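Editor's note: --cni here takes a path to a CNI manifest (testdata/kube-flannel.yaml) rather than a built-in plugin name. The same pattern with a user-supplied manifest, sketched with placeholder names (the profile name and ./my-cni.yaml are illustrative, not from the log):

    # apply a user-supplied CNI manifest at cluster creation time
    minikube start -p custom-cni-demo --memory=3072 --driver=docker --container-runtime=containerd --cni=./my-cni.yaml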

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-d4tgx" [5a1797b8-19c9-4baf-8209-90605eeedaa5] Running
E0610 17:27:38.756061    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/ingress-addon-legacy-879929/client.crt: no such file or directory
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.026441337s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-019039 "pgrep -a kubelet"
E0610 17:27:40.560942    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/no-preload-180787/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-019039 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-2bw22" [4429597d-e809-4c4a-9acb-922685612763] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-2bw22" [4429597d-e809-4c4a-9acb-922685612763] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.00855705s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-019039 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-019039 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-019039 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (57.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-019039 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:111: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-019039 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (57.589950024s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (57.59s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-019039 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-019039 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-gwddc" [6d86fc3b-b711-4354-90e0-2809dcd640ac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-gwddc" [6d86fc3b-b711-4354-90e0-2809dcd640ac] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.01489574s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.50s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-019039 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-019039 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-019039 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (59.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-019039 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:111: (dbg) Done: out/minikube-linux-arm64 start -p flannel-019039 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (59.474193297s)
--- PASS: TestNetworkPlugins/group/flannel/Start (59.47s)
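Editor's note: the flannel profile's follow-up checks wait on the flannel DaemonSet pods in the kube-flannel namespace. A hand-run version of that readiness gate, using the same selector the ControllerPod check below waits on:

    # list and wait for the kube-flannel-ds pods
    kubectl --context flannel-019039 -n kube-flannel get pods -l app=flannel
    kubectl --context flannel-019039 -n kube-flannel wait --for=condition=ready pod -l app=flannel --timeout=120s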

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-019039 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-019039 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-77pws" [51ad58bc-8bce-49ee-89ae-b100840b6153] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-77pws" [51ad58bc-8bce-49ee-89ae-b100840b6153] Running
E0610 17:29:30.447900    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/default-k8s-diff-port-389434/client.crt: no such file or directory
E0610 17:29:30.453169    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/default-k8s-diff-port-389434/client.crt: no such file or directory
E0610 17:29:30.463411    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/default-k8s-diff-port-389434/client.crt: no such file or directory
E0610 17:29:30.483656    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/default-k8s-diff-port-389434/client.crt: no such file or directory
E0610 17:29:30.523904    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/default-k8s-diff-port-389434/client.crt: no such file or directory
E0610 17:29:30.604062    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/default-k8s-diff-port-389434/client.crt: no such file or directory
E0610 17:29:30.764569    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/default-k8s-diff-port-389434/client.crt: no such file or directory
E0610 17:29:31.085475    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/default-k8s-diff-port-389434/client.crt: no such file or directory
E0610 17:29:31.726647    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/default-k8s-diff-port-389434/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 14.012510273s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.61s)
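Editor's note: the NetCatPod step deploys the test's netcat workload and waits for it to become Ready; the interleaved E0610 cert_rotation lines appear to come from client-go watching client certificates of profiles deleted earlier in the run and are noise rather than failures. An equivalent manual wait (testdata/netcat-deployment.yaml is the test's own file, so adjust the path to wherever you have a copy):

    # recreate the netcat deployment and wait for its pod to report Ready
    kubectl --context enable-default-cni-019039 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context enable-default-cni-019039 wait --for=condition=ready pod -l app=netcat --timeout=15m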

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-019039 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-019039 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-019039 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0610 17:29:33.007076    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/default-k8s-diff-port-389434/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (87.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-019039 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E0610 17:30:03.011318    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/auto-019039/client.crt: no such file or directory
E0610 17:30:03.018548    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/auto-019039/client.crt: no such file or directory
E0610 17:30:03.028910    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/auto-019039/client.crt: no such file or directory
E0610 17:30:03.049330    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/auto-019039/client.crt: no such file or directory
E0610 17:30:03.089565    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/auto-019039/client.crt: no such file or directory
E0610 17:30:03.169827    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/auto-019039/client.crt: no such file or directory
E0610 17:30:03.330818    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/auto-019039/client.crt: no such file or directory
E0610 17:30:03.651093    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/auto-019039/client.crt: no such file or directory
E0610 17:30:04.291843    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/auto-019039/client.crt: no such file or directory
E0610 17:30:05.572332    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/auto-019039/client.crt: no such file or directory
E0610 17:30:08.132781    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/auto-019039/client.crt: no such file or directory
E0610 17:30:08.386258    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/functional-351441/client.crt: no such file or directory
E0610 17:30:11.408844    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/default-k8s-diff-port-389434/client.crt: no such file or directory
E0610 17:30:13.253217    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/auto-019039/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-arm64 start -p bridge-019039 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m27.629287396s)
--- PASS: TestNetworkPlugins/group/bridge/Start (87.63s)
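Editor's note: unlike calico or flannel, the bridge CNI deploys no pods, so there is nothing to wait on in kube-system; the configuration lands as a conflist on the node. A sketch for eyeballing it over SSH (the exact file name under /etc/cni/net.d is an assumption):

    # list the CNI config the bridge plugin wrote on the node
    minikube ssh -p bridge-019039 "ls /etc/cni/net.d/ && sudo cat /etc/cni/net.d/*.conflist"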

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-gwxzf" [73610d63-b26a-4201-8c84-731c35c4c416] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.033813589s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-019039 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.50s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-019039 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-kxcwz" [62c41ad3-e01f-4ddd-a5ab-c9d213534849] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0610 17:30:23.493766    7526 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/auto-019039/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-kxcwz" [62c41ad3-e01f-4ddd-a5ab-c9d213534849] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.017161722s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.59s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-019039 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-019039 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-019039 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-019039 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (8.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-019039 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-6qn9f" [66b18333-6266-4c28-95dd-34a79bcd58c2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-6qn9f" [66b18333-6266-4c28-95dd-34a79bcd58c2] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.007230679s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-019039 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-019039 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-019039 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                    

Test skip (28/302)

x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.27.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.57s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-637757 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:234: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-637757" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-637757
--- SKIP: TestDownloadOnlyKic (0.57s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:420: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1782: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:458: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:545: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-576287" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-576287
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (5.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:92: Skipping the test as containerd container runtimes requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-019039 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-019039

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-019039

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-019039

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-019039

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-019039

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-019039

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-019039

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-019039

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-019039

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-019039

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019039"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019039"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019039"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-019039

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019039"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019039"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-019039" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-019039" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-019039" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-019039" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-019039" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-019039" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-019039" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-019039" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019039"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019039"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019039"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019039"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019039"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-019039" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-019039" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-019039" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019039"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019039"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019039"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019039"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019039"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-019039

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019039"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019039"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019039"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019039"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019039"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019039"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019039"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019039"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019039"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019039"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019039"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019039"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019039"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019039"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019039"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019039"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019039"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019039"

                                                
                                                
----------------------- debugLogs end: kubenet-019039 [took: 5.390775319s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-019039" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-019039
--- SKIP: TestNetworkPlugins/group/kubenet (5.60s)

                                                
                                    
TestNetworkPlugins/group/cilium (5.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-019039 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-019039

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-019039

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-019039

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-019039

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-019039

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-019039

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-019039

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-019039

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-019039

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-019039

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019039"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019039"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019039"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-019039

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019039"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019039"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-019039" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-019039" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-019039" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-019039" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-019039" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-019039" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-019039" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-019039" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019039"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019039"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019039"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019039"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019039"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-019039

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-019039

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-019039" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-019039" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-019039

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-019039

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-019039" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-019039" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-019039" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-019039" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-019039" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019039"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019039"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019039"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019039"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019039"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16578-2220/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 10 Jun 2023 17:02:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.30.1
      name: cluster_info
    server: https://192.168.94.2:8443
  name: force-systemd-flag-081066
contexts:
- context:
    cluster: force-systemd-flag-081066
    extensions:
    - extension:
        last-update: Sat, 10 Jun 2023 17:02:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.30.1
      name: context_info
    namespace: default
    user: force-systemd-flag-081066
  name: force-systemd-flag-081066
current-context: force-systemd-flag-081066
kind: Config
preferences: {}
users:
- name: force-systemd-flag-081066
  user:
    client-certificate: /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/force-systemd-flag-081066/client.crt
    client-key: /home/jenkins/minikube-integration/16578-2220/.minikube/profiles/force-systemd-flag-081066/client.key
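
Note: the kubeconfig dumped above defines only the force-systemd-flag-081066 context, which is why every kubectl call against cilium-019039 in this debug log fails with "context was not found". A minimal sketch of how to confirm this locally against the same kubeconfig (illustrative commands, not part of the captured test output):

# List the contexts actually present in the kubeconfig;
# only force-systemd-flag-081066 is expected to appear.
kubectl config get-contexts
# Attempting to select the missing context fails,
# matching the errors seen throughout this debug log.
kubectl config use-context cilium-019039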

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-019039

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019039"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019039"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019039"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019039"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019039"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019039"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019039"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019039"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019039"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019039"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019039"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019039"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019039"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019039"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019039"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019039"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019039"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-019039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019039"

                                                
                                                
----------------------- debugLogs end: cilium-019039 [took: 5.315051308s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-019039" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-019039
--- SKIP: TestNetworkPlugins/group/cilium (5.52s)

                                                
                                    