Test Report: Docker_Linux_crio 19476

5d2be5ad06c5c8c1678cb56a2620c3837d13735d:2024-08-19:35852

Tests failed (2/328)

| Order | Failed test                       | Duration (s) |
|-------|-----------------------------------|--------------|
| 34    | TestAddons/parallel/Ingress       | 154.47       |
| 36    | TestAddons/parallel/MetricsServer | 349.02       |
TestAddons/parallel/Ingress (154.47s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-454931 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-454931 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-454931 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c8625615-ce92-4bb3-8d0f-5725da7e75de] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c8625615-ce92-4bb3-8d0f-5725da7e75de] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.003824695s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-454931 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-454931 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.453922809s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
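
Note: ssh propagates the remote command's exit status, and curl uses exit code 28 for CURLE_OPERATION_TIMEDOUT, so the probe above most likely timed out waiting for the ingress controller rather than receiving an error response. One way to re-run the probe by hand with an explicit cap and verbose output (hypothetical follow-up, assuming the addons-454931 profile were still running):

	# Re-run the failing probe; "exit=28" would confirm a timeout, while an
	# HTTP status in the verbose output would implicate the controller instead.
	out/minikube-linux-amd64 -p addons-454931 ssh \
	  "curl -sv --max-time 30 -o /dev/null -H 'Host: nginx.example.com' http://127.0.0.1/; echo exit=\$?"
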
addons_test.go:288: (dbg) Run:  kubectl --context addons-454931 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-454931 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-454931 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-454931 addons disable ingress-dns --alsologtostderr -v=1: (1.320260544s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-454931 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-454931 addons disable ingress --alsologtostderr -v=1: (7.646416802s)
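
For reference, the steps the test just executed can be replayed against a throwaway profile; a rough sketch (the profile name is hypothetical, and the manifest paths assume a checkout of the minikube repo, where the test's testdata lives under test/integration):

	# Stand up a profile with the same driver/runtime and only the ingress addon.
	minikube start -p ingress-debug --driver=docker --container-runtime=crio --addons=ingress
	# Apply the same nginx ingress and pod/service manifests the test uses.
	kubectl --context ingress-debug replace --force -f test/integration/testdata/nginx-ingress-v1.yaml
	kubectl --context ingress-debug replace --force -f test/integration/testdata/nginx-pod-svc.yaml
	kubectl --context ingress-debug wait --for=condition=ready pod -l run=nginx --timeout=120s
	# The equivalent of the probe that failed above.
	minikube -p ingress-debug ssh "curl -s -H 'Host: nginx.example.com' http://127.0.0.1/"
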
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-454931
helpers_test.go:235: (dbg) docker inspect addons-454931:

-- stdout --
	[
	    {
	        "Id": "9ced8e49789dee9e05d6cefd0d92f50caa53f7e366483340a8eae6f7e0f42f75",
	        "Created": "2024-08-19T10:49:12.63298428Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 18590,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-19T10:49:12.765700387Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:197224e1b90979b98de246567852a03b60e3aa31dcd0de02a456282118daeb84",
	        "ResolvConfPath": "/var/lib/docker/containers/9ced8e49789dee9e05d6cefd0d92f50caa53f7e366483340a8eae6f7e0f42f75/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9ced8e49789dee9e05d6cefd0d92f50caa53f7e366483340a8eae6f7e0f42f75/hostname",
	        "HostsPath": "/var/lib/docker/containers/9ced8e49789dee9e05d6cefd0d92f50caa53f7e366483340a8eae6f7e0f42f75/hosts",
	        "LogPath": "/var/lib/docker/containers/9ced8e49789dee9e05d6cefd0d92f50caa53f7e366483340a8eae6f7e0f42f75/9ced8e49789dee9e05d6cefd0d92f50caa53f7e366483340a8eae6f7e0f42f75-json.log",
	        "Name": "/addons-454931",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-454931:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-454931",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/65ea459b84f53c92c7844e22b7e3fb8c0b9c1f93de58dddaa32fea9e56e7114c-init/diff:/var/lib/docker/overlay2/fa7200b92f30b05c6ff80b9438668c67d163f11b4c83e2bafd3c170c7f60ea40/diff",
	                "MergedDir": "/var/lib/docker/overlay2/65ea459b84f53c92c7844e22b7e3fb8c0b9c1f93de58dddaa32fea9e56e7114c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/65ea459b84f53c92c7844e22b7e3fb8c0b9c1f93de58dddaa32fea9e56e7114c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/65ea459b84f53c92c7844e22b7e3fb8c0b9c1f93de58dddaa32fea9e56e7114c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-454931",
	                "Source": "/var/lib/docker/volumes/addons-454931/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-454931",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-454931",
	                "name.minikube.sigs.k8s.io": "addons-454931",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1600eb3e74b17b8c11dddf19fc52757a4a16a1141749e25aea91d3fae69cb7be",
	            "SandboxKey": "/var/run/docker/netns/1600eb3e74b1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-454931": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "f80651d11008c8dc4bf10db3eedb33a79b04c57ebb24d7f95b0f6e3807438d87",
	                    "EndpointID": "f07c36fb1b203e6347481cac6cb7b8d0f62787aed26dd282784cb93c0c11c71a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-454931",
	                        "9ced8e49789d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
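
For the SSH-based probes, the relevant part of this inspect dump is the published port map under NetworkSettings.Ports (22/tcp on 127.0.0.1:32768 above). Either command below extracts just that field; the jq variant assumes jq is installed:

	# Docker's own template engine, no extra tooling needed:
	docker inspect addons-454931 --format '{{json .NetworkSettings.Ports}}'
	# Equivalent with jq over the full JSON dump:
	docker inspect addons-454931 | jq '.[0].NetworkSettings.Ports'
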
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-454931 -n addons-454931
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-454931 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-454931 logs -n 25: (1.195430078s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-492817 | jenkins | v1.33.1 | 19 Aug 24 10:48 UTC |                     |
	|         | download-docker-492817                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-492817                                                                   | download-docker-492817 | jenkins | v1.33.1 | 19 Aug 24 10:48 UTC | 19 Aug 24 10:48 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-843469   | jenkins | v1.33.1 | 19 Aug 24 10:48 UTC |                     |
	|         | binary-mirror-843469                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33413                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-843469                                                                     | binary-mirror-843469   | jenkins | v1.33.1 | 19 Aug 24 10:48 UTC | 19 Aug 24 10:48 UTC |
	| addons  | disable dashboard -p                                                                        | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:48 UTC |                     |
	|         | addons-454931                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:48 UTC |                     |
	|         | addons-454931                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-454931 --wait=true                                                                | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:48 UTC | 19 Aug 24 10:52 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | addons-454931 addons disable                                                                | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:52 UTC | 19 Aug 24 10:52 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-454931 addons disable                                                                | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:52 UTC | 19 Aug 24 10:52 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ssh     | addons-454931 ssh cat                                                                       | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:52 UTC | 19 Aug 24 10:52 UTC |
	|         | /opt/local-path-provisioner/pvc-6f8c5a14-e9d6-473e-8f6f-d18080db96da_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-454931 addons disable                                                                | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:52 UTC | 19 Aug 24 10:52 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:52 UTC | 19 Aug 24 10:52 UTC |
	|         | addons-454931                                                                               |                        |         |         |                     |                     |
	| ip      | addons-454931 ip                                                                            | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:52 UTC | 19 Aug 24 10:52 UTC |
	| addons  | addons-454931 addons disable                                                                | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:52 UTC | 19 Aug 24 10:52 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:52 UTC | 19 Aug 24 10:52 UTC |
	|         | -p addons-454931                                                                            |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:52 UTC | 19 Aug 24 10:52 UTC |
	|         | addons-454931                                                                               |                        |         |         |                     |                     |
	| addons  | addons-454931 addons disable                                                                | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:52 UTC | 19 Aug 24 10:52 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:52 UTC | 19 Aug 24 10:52 UTC |
	|         | -p addons-454931                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-454931 ssh curl -s                                                                   | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:53 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-454931 addons                                                                        | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:53 UTC | 19 Aug 24 10:53 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-454931 addons disable                                                                | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:53 UTC | 19 Aug 24 10:53 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-454931 addons                                                                        | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:53 UTC | 19 Aug 24 10:53 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-454931 ip                                                                            | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:55 UTC | 19 Aug 24 10:55 UTC |
	| addons  | addons-454931 addons disable                                                                | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:55 UTC | 19 Aug 24 10:55 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-454931 addons disable                                                                | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:55 UTC | 19 Aug 24 10:55 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 10:48:50
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 10:48:50.161206   17853 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:48:50.161479   17853 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:48:50.161488   17853 out.go:358] Setting ErrFile to fd 2...
	I0819 10:48:50.161493   17853 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:48:50.161716   17853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-9624/.minikube/bin
	I0819 10:48:50.162371   17853 out.go:352] Setting JSON to false
	I0819 10:48:50.163154   17853 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":1870,"bootTime":1724062660,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 10:48:50.163212   17853 start.go:139] virtualization: kvm guest
	I0819 10:48:50.165454   17853 out.go:177] * [addons-454931] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 10:48:50.166710   17853 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 10:48:50.166747   17853 notify.go:220] Checking for updates...
	I0819 10:48:50.169301   17853 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 10:48:50.170562   17853 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19476-9624/kubeconfig
	I0819 10:48:50.171735   17853 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-9624/.minikube
	I0819 10:48:50.172965   17853 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 10:48:50.174057   17853 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 10:48:50.175322   17853 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 10:48:50.196925   17853 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 10:48:50.197064   17853 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 10:48:50.246716   17853 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-19 10:48:50.237681928 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 10:48:50.246819   17853 docker.go:307] overlay module found
	I0819 10:48:50.248604   17853 out.go:177] * Using the docker driver based on user configuration
	I0819 10:48:50.249841   17853 start.go:297] selected driver: docker
	I0819 10:48:50.249863   17853 start.go:901] validating driver "docker" against <nil>
	I0819 10:48:50.249874   17853 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 10:48:50.250607   17853 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 10:48:50.297381   17853 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-19 10:48:50.288584744 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 10:48:50.297532   17853 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 10:48:50.297776   17853 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 10:48:50.299259   17853 out.go:177] * Using Docker driver with root privileges
	I0819 10:48:50.300409   17853 cni.go:84] Creating CNI manager for ""
	I0819 10:48:50.300426   17853 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 10:48:50.300440   17853 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 10:48:50.300512   17853 start.go:340] cluster config:
	{Name:addons-454931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-454931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:48:50.301792   17853 out.go:177] * Starting "addons-454931" primary control-plane node in "addons-454931" cluster
	I0819 10:48:50.303131   17853 cache.go:121] Beginning downloading kic base image for docker with crio
	I0819 10:48:50.304324   17853 out.go:177] * Pulling base image v0.0.44-1723740748-19452 ...
	I0819 10:48:50.305663   17853 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 10:48:50.305699   17853 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19476-9624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 10:48:50.305710   17853 cache.go:56] Caching tarball of preloaded images
	I0819 10:48:50.305749   17853 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 10:48:50.305794   17853 preload.go:172] Found /home/jenkins/minikube-integration/19476-9624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 10:48:50.305806   17853 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 10:48:50.306098   17853 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/config.json ...
	I0819 10:48:50.306123   17853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/config.json: {Name:mk3c980c39a9d2b1e735137a0236c438a7a88525 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:48:50.321402   17853 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 10:48:50.321533   17853 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 10:48:50.321550   17853 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0819 10:48:50.321556   17853 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0819 10:48:50.321570   17853 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0819 10:48:50.321581   17853 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from local cache
	I0819 10:49:02.642652   17853 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from cached tarball
	I0819 10:49:02.642684   17853 cache.go:194] Successfully downloaded all kic artifacts
	I0819 10:49:02.642726   17853 start.go:360] acquireMachinesLock for addons-454931: {Name:mkabded988b43486bb8e374098ad1d731f0bf562 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:49:02.642843   17853 start.go:364] duration metric: took 99.225µs to acquireMachinesLock for "addons-454931"
	I0819 10:49:02.642867   17853 start.go:93] Provisioning new machine with config: &{Name:addons-454931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-454931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 10:49:02.642947   17853 start.go:125] createHost starting for "" (driver="docker")
	I0819 10:49:02.648503   17853 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0819 10:49:02.648740   17853 start.go:159] libmachine.API.Create for "addons-454931" (driver="docker")
	I0819 10:49:02.648768   17853 client.go:168] LocalClient.Create starting
	I0819 10:49:02.648864   17853 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19476-9624/.minikube/certs/ca.pem
	I0819 10:49:02.705200   17853 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19476-9624/.minikube/certs/cert.pem
	I0819 10:49:02.997153   17853 cli_runner.go:164] Run: docker network inspect addons-454931 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0819 10:49:03.013106   17853 cli_runner.go:211] docker network inspect addons-454931 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0819 10:49:03.013193   17853 network_create.go:284] running [docker network inspect addons-454931] to gather additional debugging logs...
	I0819 10:49:03.013216   17853 cli_runner.go:164] Run: docker network inspect addons-454931
	W0819 10:49:03.029224   17853 cli_runner.go:211] docker network inspect addons-454931 returned with exit code 1
	I0819 10:49:03.029253   17853 network_create.go:287] error running [docker network inspect addons-454931]: docker network inspect addons-454931: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-454931 not found
	I0819 10:49:03.029264   17853 network_create.go:289] output of [docker network inspect addons-454931]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-454931 not found
	
	** /stderr **
	I0819 10:49:03.029378   17853 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 10:49:03.045263   17853 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a572c0}
	I0819 10:49:03.045308   17853 network_create.go:124] attempt to create docker network addons-454931 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0819 10:49:03.045356   17853 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-454931 addons-454931
	I0819 10:49:03.107467   17853 network_create.go:108] docker network addons-454931 192.168.49.0/24 created
	I0819 10:49:03.107498   17853 kic.go:121] calculated static IP "192.168.49.2" for the "addons-454931" container
	I0819 10:49:03.107561   17853 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0819 10:49:03.122899   17853 cli_runner.go:164] Run: docker volume create addons-454931 --label name.minikube.sigs.k8s.io=addons-454931 --label created_by.minikube.sigs.k8s.io=true
	I0819 10:49:03.140217   17853 oci.go:103] Successfully created a docker volume addons-454931
	I0819 10:49:03.140294   17853 cli_runner.go:164] Run: docker run --rm --name addons-454931-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-454931 --entrypoint /usr/bin/test -v addons-454931:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib
	I0819 10:49:08.057033   17853 cli_runner.go:217] Completed: docker run --rm --name addons-454931-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-454931 --entrypoint /usr/bin/test -v addons-454931:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib: (4.916691947s)
	I0819 10:49:08.057063   17853 oci.go:107] Successfully prepared a docker volume addons-454931
	I0819 10:49:08.057083   17853 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 10:49:08.057117   17853 kic.go:194] Starting extracting preloaded images to volume ...
	I0819 10:49:08.057189   17853 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19476-9624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-454931:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir
	I0819 10:49:12.570941   17853 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19476-9624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-454931:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir: (4.513691138s)
	I0819 10:49:12.570978   17853 kic.go:203] duration metric: took 4.513868843s to extract preloaded images to volume ...
	W0819 10:49:12.571107   17853 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0819 10:49:12.571193   17853 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0819 10:49:12.618523   17853 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-454931 --name addons-454931 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-454931 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-454931 --network addons-454931 --ip 192.168.49.2 --volume addons-454931:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d
	I0819 10:49:12.928131   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Running}}
	I0819 10:49:12.945615   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:12.962954   17853 cli_runner.go:164] Run: docker exec addons-454931 stat /var/lib/dpkg/alternatives/iptables
	I0819 10:49:13.004581   17853 oci.go:144] the created container "addons-454931" has a running status.
	I0819 10:49:13.004618   17853 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa...
	I0819 10:49:13.066647   17853 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0819 10:49:13.086843   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:13.103329   17853 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0819 10:49:13.103351   17853 kic_runner.go:114] Args: [docker exec --privileged addons-454931 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0819 10:49:13.144531   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:13.164563   17853 machine.go:93] provisionDockerMachine start ...
	I0819 10:49:13.164642   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:13.186723   17853 main.go:141] libmachine: Using SSH client type: native
	I0819 10:49:13.186946   17853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0819 10:49:13.186960   17853 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 10:49:13.187603   17853 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48226->127.0.0.1:32768: read: connection reset by peer
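
The first dial fails with a connection reset because sshd inside the freshly started container is not accepting connections yet; libmachine retries until the command succeeds about three seconds later. The host port it dials comes from the inspect template in the preceding Run line, which without the extra quoting reads:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-454931
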
	I0819 10:49:16.305065   17853 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-454931
	
	I0819 10:49:16.305090   17853 ubuntu.go:169] provisioning hostname "addons-454931"
	I0819 10:49:16.305149   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:16.322424   17853 main.go:141] libmachine: Using SSH client type: native
	I0819 10:49:16.322598   17853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0819 10:49:16.322612   17853 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-454931 && echo "addons-454931" | sudo tee /etc/hostname
	I0819 10:49:16.448802   17853 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-454931
	
	I0819 10:49:16.448864   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:16.467862   17853 main.go:141] libmachine: Using SSH client type: native
	I0819 10:49:16.468028   17853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0819 10:49:16.468044   17853 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-454931' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-454931/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-454931' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:49:16.589625   17853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:49:16.589677   17853 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19476-9624/.minikube CaCertPath:/home/jenkins/minikube-integration/19476-9624/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19476-9624/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19476-9624/.minikube}
	I0819 10:49:16.589699   17853 ubuntu.go:177] setting up certificates
	I0819 10:49:16.589709   17853 provision.go:84] configureAuth start
	I0819 10:49:16.589760   17853 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-454931
	I0819 10:49:16.608983   17853 provision.go:143] copyHostCerts
	I0819 10:49:16.609066   17853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-9624/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19476-9624/.minikube/ca.pem (1082 bytes)
	I0819 10:49:16.609177   17853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-9624/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19476-9624/.minikube/cert.pem (1123 bytes)
	I0819 10:49:16.609237   17853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-9624/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19476-9624/.minikube/key.pem (1679 bytes)
	I0819 10:49:16.609283   17853 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19476-9624/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19476-9624/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19476-9624/.minikube/certs/ca-key.pem org=jenkins.addons-454931 san=[127.0.0.1 192.168.49.2 addons-454931 localhost minikube]
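
The server certificate generated here carries the SANs listed in the log (127.0.0.1, 192.168.49.2, addons-454931, localhost, minikube). They can be checked after the fact with openssl:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19476-9624/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
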
	I0819 10:49:16.701989   17853 provision.go:177] copyRemoteCerts
	I0819 10:49:16.702045   17853 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:49:16.702076   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:16.719028   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:16.805893   17853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-9624/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:49:16.827909   17853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-9624/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 10:49:16.849709   17853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-9624/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 10:49:16.871916   17853 provision.go:87] duration metric: took 282.195712ms to configureAuth
	I0819 10:49:16.871946   17853 ubuntu.go:193] setting minikube options for container-runtime
	I0819 10:49:16.872111   17853 config.go:182] Loaded profile config "addons-454931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 10:49:16.872214   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:16.888761   17853 main.go:141] libmachine: Using SSH client type: native
	I0819 10:49:16.888915   17853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0819 10:49:16.888929   17853 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 10:49:17.095244   17853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 10:49:17.095268   17853 machine.go:96] duration metric: took 3.930681597s to provisionDockerMachine
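
The provisioning step just above wrote a sysconfig drop-in carrying CRIO_MINIKUBE_OPTIONS and restarted cri-o. To see the file, and (assuming the kicbase crio unit sources it as an EnvironmentFile, which this log does not show) where it is wired into the service:

    minikube -p addons-454931 ssh -- cat /etc/sysconfig/crio.minikube
    minikube -p addons-454931 ssh -- systemctl cat crio
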
	I0819 10:49:17.095279   17853 client.go:171] duration metric: took 14.446505109s to LocalClient.Create
	I0819 10:49:17.095298   17853 start.go:167] duration metric: took 14.446561239s to libmachine.API.Create "addons-454931"
	I0819 10:49:17.095312   17853 start.go:293] postStartSetup for "addons-454931" (driver="docker")
	I0819 10:49:17.095322   17853 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:49:17.095382   17853 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:49:17.095415   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:17.112794   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:17.202240   17853 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:49:17.205476   17853 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0819 10:49:17.205510   17853 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0819 10:49:17.205518   17853 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0819 10:49:17.205527   17853 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0819 10:49:17.205541   17853 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-9624/.minikube/addons for local assets ...
	I0819 10:49:17.205598   17853 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-9624/.minikube/files for local assets ...
	I0819 10:49:17.205621   17853 start.go:296] duration metric: took 110.304672ms for postStartSetup
	I0819 10:49:17.205925   17853 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-454931
	I0819 10:49:17.222314   17853 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/config.json ...
	I0819 10:49:17.222560   17853 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:49:17.222611   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:17.239054   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:17.322313   17853 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0819 10:49:17.326437   17853 start.go:128] duration metric: took 14.683475473s to createHost
	I0819 10:49:17.326464   17853 start.go:83] releasing machines lock for "addons-454931", held for 14.683609595s
	I0819 10:49:17.326527   17853 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-454931
	I0819 10:49:17.343280   17853 ssh_runner.go:195] Run: cat /version.json
	I0819 10:49:17.343326   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:17.343396   17853 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:49:17.343469   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:17.361821   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:17.363094   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:17.445191   17853 ssh_runner.go:195] Run: systemctl --version
	I0819 10:49:17.535776   17853 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 10:49:17.670779   17853 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 10:49:17.674691   17853 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:49:17.691738   17853 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0819 10:49:17.691805   17853 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:49:17.717461   17853 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
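
Rather than deleting the stock CNI configs, minikube renames them with a .mk_disabled suffix so cri-o ignores them and they can be restored later. The two files disabled above should show up like this:

    minikube -p addons-454931 ssh -- sudo ls /etc/cni/net.d/
    # expected to include:
    #   100-crio-bridge.conf.mk_disabled
    #   87-podman-bridge.conflist.mk_disabled
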
	I0819 10:49:17.717482   17853 start.go:495] detecting cgroup driver to use...
	I0819 10:49:17.717510   17853 detect.go:187] detected "cgroupfs" cgroup driver on host os
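
The cgroup driver is detected from the host Docker daemon so that cri-o and the kubelet can be configured to match. The same answer can be read directly from docker:

    docker info --format '{{.CgroupDriver}}'
    # cgroupfs
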
	I0819 10:49:17.717552   17853 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:49:17.731446   17853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:49:17.741948   17853 docker.go:217] disabling cri-docker service (if available) ...
	I0819 10:49:17.742007   17853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 10:49:17.754575   17853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 10:49:17.767074   17853 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 10:49:17.846360   17853 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 10:49:17.926806   17853 docker.go:233] disabling docker service ...
	I0819 10:49:17.926864   17853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 10:49:17.943462   17853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 10:49:17.954046   17853 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 10:49:18.028523   17853 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 10:49:18.107524   17853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 10:49:18.118027   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:49:18.133535   17853 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 10:49:18.133600   17853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 10:49:18.142715   17853 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 10:49:18.142771   17853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 10:49:18.152104   17853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 10:49:18.161245   17853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 10:49:18.170270   17853 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:49:18.178227   17853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 10:49:18.187053   17853 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 10:49:18.201120   17853 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 10:49:18.210546   17853 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:49:18.218431   17853 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
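
Taken together, the sed edits above leave a drop-in along these lines. The TOML table headers are part of cri-o's stock 02-crio.conf and are an assumption here, since the log only shows the key rewrites:

    # /etc/crio/crio.conf.d/02-crio.conf (sketch)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
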
	I0819 10:49:18.226302   17853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:49:18.306342   17853 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 10:49:18.400066   17853 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 10:49:18.400134   17853 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 10:49:18.403489   17853 start.go:563] Will wait 60s for crictl version
	I0819 10:49:18.403551   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:49:18.406626   17853 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 10:49:18.439196   17853 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0819 10:49:18.439293   17853 ssh_runner.go:195] Run: crio --version
	I0819 10:49:18.472753   17853 ssh_runner.go:195] Run: crio --version
	I0819 10:49:18.510707   17853 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0819 10:49:18.512095   17853 cli_runner.go:164] Run: docker network inspect addons-454931 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 10:49:18.529015   17853 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0819 10:49:18.532585   17853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:49:18.542912   17853 kubeadm.go:883] updating cluster {Name:addons-454931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-454931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 10:49:18.543046   17853 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 10:49:18.543108   17853 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 10:49:18.605785   17853 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 10:49:18.605810   17853 crio.go:433] Images already preloaded, skipping extraction
	I0819 10:49:18.605863   17853 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 10:49:18.637324   17853 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 10:49:18.637349   17853 cache_images.go:84] Images are preloaded, skipping loading
	I0819 10:49:18.637357   17853 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0819 10:49:18.637454   17853 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-454931 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-454931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
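
Note the empty ExecStart= line in the kubelet drop-in above: systemd allows only one ExecStart for a simple service, so an override drop-in must first clear the value inherited from the base unit before setting its own. The merged result can be reviewed on the node with:

    minikube -p addons-454931 ssh -- systemctl cat kubelet
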
	I0819 10:49:18.637515   17853 ssh_runner.go:195] Run: crio config
	I0819 10:49:18.679671   17853 cni.go:84] Creating CNI manager for ""
	I0819 10:49:18.679694   17853 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 10:49:18.679706   17853 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 10:49:18.679733   17853 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-454931 NodeName:addons-454931 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 10:49:18.679869   17853 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-454931"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
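
Before handing this file (written to /var/tmp/minikube/kubeadm.yaml a few lines below) to kubeadm init, it can be sanity-checked; recent kubeadm releases ship a validate subcommand, though whether the exact invocation below matches v1.31.0 is an assumption:

    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml
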
	
	I0819 10:49:18.679925   17853 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 10:49:18.687843   17853 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 10:49:18.687903   17853 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 10:49:18.695343   17853 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0819 10:49:18.711270   17853 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 10:49:18.728434   17853 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0819 10:49:18.745124   17853 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0819 10:49:18.748458   17853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
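
This grep-and-append one-liner, together with the host.minikube.internal one run at 10:49:18.532585, leaves the node's /etc/hosts with entries like:

    192.168.49.1	host.minikube.internal
    192.168.49.2	control-plane.minikube.internal
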
	I0819 10:49:18.758830   17853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:49:18.830101   17853 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:49:18.842882   17853 certs.go:68] Setting up /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931 for IP: 192.168.49.2
	I0819 10:49:18.842913   17853 certs.go:194] generating shared ca certs ...
	I0819 10:49:18.842933   17853 certs.go:226] acquiring lock for ca certs: {Name:mk48fd67c854a9bf925bf664f1df64b0d0b4b6de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:49:18.843057   17853 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19476-9624/.minikube/ca.key
	I0819 10:49:18.961901   17853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-9624/.minikube/ca.crt ...
	I0819 10:49:18.961935   17853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-9624/.minikube/ca.crt: {Name:mkc761c5afb6179bb50a06240c218cbbe834c8c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:49:18.962102   17853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-9624/.minikube/ca.key ...
	I0819 10:49:18.962113   17853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-9624/.minikube/ca.key: {Name:mk4046bc8960e7e057b5e1ebdc87ccbaa32a3d4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:49:18.962183   17853 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19476-9624/.minikube/proxy-client-ca.key
	I0819 10:49:19.135757   17853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-9624/.minikube/proxy-client-ca.crt ...
	I0819 10:49:19.135787   17853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-9624/.minikube/proxy-client-ca.crt: {Name:mkf9aa29e8bda76d7d88fcbbc0888bf849fca9a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:49:19.135941   17853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-9624/.minikube/proxy-client-ca.key ...
	I0819 10:49:19.135951   17853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-9624/.minikube/proxy-client-ca.key: {Name:mk0cad2e13f88585bf00aaffca31c72edb515c6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:49:19.136015   17853 certs.go:256] generating profile certs ...
	I0819 10:49:19.136069   17853 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.key
	I0819 10:49:19.136084   17853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.crt with IP's: []
	I0819 10:49:19.193754   17853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.crt ...
	I0819 10:49:19.193788   17853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.crt: {Name:mkbc2b63e57cbe75f518a16c0ee9d186632674dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:49:19.193961   17853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.key ...
	I0819 10:49:19.193973   17853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.key: {Name:mk5558469fc09392c82d41d3442a677656aeff7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:49:19.194057   17853 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/apiserver.key.5ebde190
	I0819 10:49:19.194078   17853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/apiserver.crt.5ebde190 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0819 10:49:19.313968   17853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/apiserver.crt.5ebde190 ...
	I0819 10:49:19.313999   17853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/apiserver.crt.5ebde190: {Name:mkff3fe531f5c3cd481e431a22a5c83a62be088e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:49:19.314168   17853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/apiserver.key.5ebde190 ...
	I0819 10:49:19.314182   17853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/apiserver.key.5ebde190: {Name:mkbe31936aac126a7f0346838926215893f0d8ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:49:19.314251   17853 certs.go:381] copying /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/apiserver.crt.5ebde190 -> /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/apiserver.crt
	I0819 10:49:19.314319   17853 certs.go:385] copying /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/apiserver.key.5ebde190 -> /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/apiserver.key
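
The apiserver cert written above chains to the minikubeCA generated earlier; a quick consistency check from the host:

    openssl verify \
      -CAfile /home/jenkins/minikube-integration/19476-9624/.minikube/ca.crt \
      /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/apiserver.crt
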
	I0819 10:49:19.314364   17853 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/proxy-client.key
	I0819 10:49:19.314380   17853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/proxy-client.crt with IP's: []
	I0819 10:49:19.423062   17853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/proxy-client.crt ...
	I0819 10:49:19.423092   17853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/proxy-client.crt: {Name:mk655c9d95fbfb730e1315e8ac055f617ce08e74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:49:19.423244   17853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/proxy-client.key ...
	I0819 10:49:19.423254   17853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/proxy-client.key: {Name:mka4c73dffc73321d54f0c5421c73ff065a9d0f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:49:19.423411   17853 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-9624/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 10:49:19.423447   17853 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-9624/.minikube/certs/ca.pem (1082 bytes)
	I0819 10:49:19.423471   17853 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-9624/.minikube/certs/cert.pem (1123 bytes)
	I0819 10:49:19.423494   17853 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-9624/.minikube/certs/key.pem (1679 bytes)
	I0819 10:49:19.424091   17853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-9624/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 10:49:19.446828   17853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-9624/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 10:49:19.467453   17853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-9624/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 10:49:19.489688   17853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-9624/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 10:49:19.511929   17853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0819 10:49:19.534249   17853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 10:49:19.558476   17853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 10:49:19.580188   17853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 10:49:19.601770   17853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-9624/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 10:49:19.623799   17853 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 10:49:19.640747   17853 ssh_runner.go:195] Run: openssl version
	I0819 10:49:19.645867   17853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 10:49:19.654554   17853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:49:19.657876   17853 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:49:19.657934   17853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:49:19.664234   17853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
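
The b5213941.0 symlink name is OpenSSL's subject-name hash of the CA, computed by the -hash invocation two lines up; anything under /etc/ssl/certs named <hash>.0 is found during chain building. Reproducing the name by hand:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941
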
	I0819 10:49:19.672838   17853 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 10:49:19.676060   17853 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 10:49:19.676119   17853 kubeadm.go:392] StartCluster: {Name:addons-454931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-454931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:49:19.676193   17853 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 10:49:19.676235   17853 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 10:49:19.707766   17853 cri.go:89] found id: ""
	I0819 10:49:19.707836   17853 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 10:49:19.715505   17853 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 10:49:19.723310   17853 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0819 10:49:19.723364   17853 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 10:49:19.731302   17853 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 10:49:19.731319   17853 kubeadm.go:157] found existing configuration files:
	
	I0819 10:49:19.731359   17853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 10:49:19.739239   17853 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 10:49:19.739291   17853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 10:49:19.747064   17853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 10:49:19.754814   17853 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 10:49:19.754863   17853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 10:49:19.762429   17853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 10:49:19.770186   17853 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 10:49:19.770252   17853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 10:49:19.778808   17853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 10:49:19.786630   17853 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 10:49:19.786680   17853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
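
The four grep/rm pairs above implement one rule: any kubeconfig under /etc/kubernetes that does not point at https://control-plane.minikube.internal:8443 is considered stale and removed before kubeadm init. An equivalent sketch of the loop:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done
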
	I0819 10:49:19.794074   17853 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0819 10:49:19.829715   17853 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 10:49:19.829810   17853 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 10:49:19.845029   17853 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0819 10:49:19.845110   17853 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1066-gcp
	I0819 10:49:19.845168   17853 kubeadm.go:310] OS: Linux
	I0819 10:49:19.845225   17853 kubeadm.go:310] CGROUPS_CPU: enabled
	I0819 10:49:19.845300   17853 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0819 10:49:19.845368   17853 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0819 10:49:19.845450   17853 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0819 10:49:19.845496   17853 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0819 10:49:19.845580   17853 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0819 10:49:19.845676   17853 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0819 10:49:19.845730   17853 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0819 10:49:19.845779   17853 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0819 10:49:19.893054   17853 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 10:49:19.893177   17853 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 10:49:19.893275   17853 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 10:49:19.899154   17853 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 10:49:19.902076   17853 out.go:235]   - Generating certificates and keys ...
	I0819 10:49:19.902193   17853 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 10:49:19.902283   17853 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 10:49:20.090446   17853 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 10:49:20.288823   17853 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 10:49:20.596534   17853 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 10:49:20.864187   17853 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 10:49:21.037487   17853 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 10:49:21.037670   17853 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-454931 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0819 10:49:21.211151   17853 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 10:49:21.211284   17853 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-454931 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0819 10:49:21.288446   17853 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 10:49:21.482035   17853 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 10:49:21.615953   17853 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 10:49:21.616032   17853 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 10:49:21.961107   17853 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 10:49:22.049138   17853 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 10:49:22.119968   17853 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 10:49:22.221399   17853 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 10:49:22.341876   17853 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 10:49:22.342398   17853 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 10:49:22.344934   17853 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 10:49:22.347037   17853 out.go:235]   - Booting up control plane ...
	I0819 10:49:22.347142   17853 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 10:49:22.347291   17853 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 10:49:22.347432   17853 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 10:49:22.356614   17853 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 10:49:22.361734   17853 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 10:49:22.361810   17853 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 10:49:22.443967   17853 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 10:49:22.444141   17853 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 10:49:22.945559   17853 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.647834ms
	I0819 10:49:22.945683   17853 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 10:49:27.447693   17853 kubeadm.go:310] [api-check] The API server is healthy after 4.502072204s
	I0819 10:49:27.458316   17853 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 10:49:27.471421   17853 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 10:49:27.489735   17853 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 10:49:27.489946   17853 kubeadm.go:310] [mark-control-plane] Marking the node addons-454931 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 10:49:27.496877   17853 kubeadm.go:310] [bootstrap-token] Using token: drl235.cjwdnkfrhgh3xdmw
	I0819 10:49:27.498347   17853 out.go:235]   - Configuring RBAC rules ...
	I0819 10:49:27.498495   17853 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 10:49:27.501602   17853 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 10:49:27.508849   17853 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 10:49:27.511182   17853 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 10:49:27.514009   17853 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 10:49:27.516213   17853 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 10:49:27.854495   17853 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 10:49:28.274928   17853 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 10:49:28.853496   17853 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 10:49:28.854362   17853 kubeadm.go:310] 
	I0819 10:49:28.854421   17853 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 10:49:28.854432   17853 kubeadm.go:310] 
	I0819 10:49:28.854507   17853 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 10:49:28.854523   17853 kubeadm.go:310] 
	I0819 10:49:28.854553   17853 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 10:49:28.854607   17853 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 10:49:28.854649   17853 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 10:49:28.854660   17853 kubeadm.go:310] 
	I0819 10:49:28.854701   17853 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 10:49:28.854707   17853 kubeadm.go:310] 
	I0819 10:49:28.854748   17853 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 10:49:28.854754   17853 kubeadm.go:310] 
	I0819 10:49:28.854793   17853 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 10:49:28.854856   17853 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 10:49:28.854916   17853 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 10:49:28.854923   17853 kubeadm.go:310] 
	I0819 10:49:28.854990   17853 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 10:49:28.855084   17853 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 10:49:28.855106   17853 kubeadm.go:310] 
	I0819 10:49:28.855223   17853 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token drl235.cjwdnkfrhgh3xdmw \
	I0819 10:49:28.855323   17853 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7ac81bd34d5e0dd4c745e6e1049376f9105cbd830050f6d1cbc53a7018b4d10a \
	I0819 10:49:28.855348   17853 kubeadm.go:310] 	--control-plane 
	I0819 10:49:28.855355   17853 kubeadm.go:310] 
	I0819 10:49:28.855425   17853 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 10:49:28.855433   17853 kubeadm.go:310] 
	I0819 10:49:28.855508   17853 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token drl235.cjwdnkfrhgh3xdmw \
	I0819 10:49:28.855599   17853 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7ac81bd34d5e0dd4c745e6e1049376f9105cbd830050f6d1cbc53a7018b4d10a 
	I0819 10:49:28.857649   17853 kubeadm.go:310] W0819 10:49:19.827206    1293 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 10:49:28.857929   17853 kubeadm.go:310] W0819 10:49:19.827792    1293 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 10:49:28.858111   17853 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1066-gcp\n", err: exit status 1
	I0819 10:49:28.858200   17853 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
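
The deprecation warnings above name their own fix; migrating the config minikube generated to the newer API version would look like this (the output filename is illustrative):

    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm-new.yaml
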
	I0819 10:49:28.858223   17853 cni.go:84] Creating CNI manager for ""
	I0819 10:49:28.858230   17853 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 10:49:28.860078   17853 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 10:49:28.861210   17853 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 10:49:28.865230   17853 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 10:49:28.865245   17853 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 10:49:28.882605   17853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
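
The manifest applied here is the kindnet CNI recommended above for the docker driver + crio runtime combination. Assuming the manifest creates a DaemonSet named kindnet in kube-system (the name is not shown in this log), its rollout can be watched with:

    kubectl --context addons-454931 -n kube-system rollout status ds/kindnet
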
	I0819 10:49:29.077337   17853 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 10:49:29.077400   17853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:49:29.077448   17853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-454931 minikube.k8s.io/updated_at=2024_08_19T10_49_29_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934 minikube.k8s.io/name=addons-454931 minikube.k8s.io/primary=true
	I0819 10:49:29.085752   17853 ops.go:34] apiserver oom_adj: -16
	I0819 10:49:29.172000   17853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:49:29.672271   17853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:49:30.172383   17853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:49:30.672160   17853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:49:31.172075   17853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:49:31.672372   17853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:49:32.172709   17853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:49:32.673044   17853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:49:32.737833   17853 kubeadm.go:1113] duration metric: took 3.660490147s to wait for elevateKubeSystemPrivileges
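
The eight identical Run lines above are a roughly 500ms poll waiting for the default ServiceAccount to exist, which is the last thing minikube gates on before declaring the privilege elevation done. The same wait as a shell sketch:

    until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
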
	I0819 10:49:32.737867   17853 kubeadm.go:394] duration metric: took 13.061753483s to StartCluster
	I0819 10:49:32.737883   17853 settings.go:142] acquiring lock: {Name:mka0415b2b44df4b87df0b554c885fde1a08273f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:49:32.737982   17853 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19476-9624/kubeconfig
	I0819 10:49:32.738313   17853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-9624/kubeconfig: {Name:mk5e1f8a598926e7f378554b3f9ff1e342d2d455 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:49:32.738487   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 10:49:32.738507   17853 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 10:49:32.738589   17853 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
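
The toEnable map is the per-profile addon configuration; the same switches are what `minikube addons enable` and `minikube addons disable` flip. For example, to inspect or toggle one of the addons exercised in this run:

	minikube -p addons-454931 addons list
	minikube -p addons-454931 addons enable metrics-server
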
	I0819 10:49:32.738685   17853 config.go:182] Loaded profile config "addons-454931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 10:49:32.738693   17853 addons.go:69] Setting helm-tiller=true in profile "addons-454931"
	I0819 10:49:32.738710   17853 addons.go:69] Setting volumesnapshots=true in profile "addons-454931"
	I0819 10:49:32.738709   17853 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-454931"
	I0819 10:49:32.738730   17853 addons.go:234] Setting addon helm-tiller=true in "addons-454931"
	I0819 10:49:32.738690   17853 addons.go:69] Setting yakd=true in profile "addons-454931"
	I0819 10:49:32.738735   17853 addons.go:234] Setting addon volumesnapshots=true in "addons-454931"
	I0819 10:49:32.738738   17853 addons.go:69] Setting registry=true in profile "addons-454931"
	I0819 10:49:32.738750   17853 addons.go:234] Setting addon yakd=true in "addons-454931"
	I0819 10:49:32.738805   17853 host.go:66] Checking if "addons-454931" exists ...
	I0819 10:49:32.738702   17853 addons.go:69] Setting volcano=true in profile "addons-454931"
	I0819 10:49:32.738892   17853 addons.go:234] Setting addon volcano=true in "addons-454931"
	I0819 10:49:32.738931   17853 host.go:66] Checking if "addons-454931" exists ...
	I0819 10:49:32.738717   17853 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-454931"
	I0819 10:49:32.739021   17853 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-454931"
	I0819 10:49:32.738756   17853 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-454931"
	I0819 10:49:32.739106   17853 host.go:66] Checking if "addons-454931" exists ...
	I0819 10:49:32.738757   17853 addons.go:234] Setting addon registry=true in "addons-454931"
	I0819 10:49:32.739211   17853 host.go:66] Checking if "addons-454931" exists ...
	I0819 10:49:32.739301   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:32.738739   17853 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-454931"
	I0819 10:49:32.739392   17853 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-454931"
	I0819 10:49:32.738765   17853 addons.go:69] Setting storage-provisioner=true in profile "addons-454931"
	I0819 10:49:32.739415   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:32.739419   17853 host.go:66] Checking if "addons-454931" exists ...
	I0819 10:49:32.739429   17853 addons.go:234] Setting addon storage-provisioner=true in "addons-454931"
	I0819 10:49:32.739451   17853 host.go:66] Checking if "addons-454931" exists ...
	I0819 10:49:32.739528   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:32.739624   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:32.739854   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:32.739908   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:32.738765   17853 addons.go:69] Setting cloud-spanner=true in profile "addons-454931"
	I0819 10:49:32.740210   17853 addons.go:234] Setting addon cloud-spanner=true in "addons-454931"
	I0819 10:49:32.740238   17853 host.go:66] Checking if "addons-454931" exists ...
	I0819 10:49:32.738767   17853 addons.go:69] Setting ingress=true in profile "addons-454931"
	I0819 10:49:32.740349   17853 addons.go:234] Setting addon ingress=true in "addons-454931"
	I0819 10:49:32.740394   17853 host.go:66] Checking if "addons-454931" exists ...
	I0819 10:49:32.740697   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:32.740953   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:32.738769   17853 host.go:66] Checking if "addons-454931" exists ...
	I0819 10:49:32.741266   17853 out.go:177] * Verifying Kubernetes components...
	I0819 10:49:32.738769   17853 host.go:66] Checking if "addons-454931" exists ...
	I0819 10:49:32.742041   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:32.738775   17853 addons.go:69] Setting default-storageclass=true in profile "addons-454931"
	I0819 10:49:32.742665   17853 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-454931"
	I0819 10:49:32.742821   17853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:49:32.738776   17853 addons.go:69] Setting gcp-auth=true in profile "addons-454931"
	I0819 10:49:32.743010   17853 mustload.go:65] Loading cluster: addons-454931
	I0819 10:49:32.743189   17853 config.go:182] Loaded profile config "addons-454931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 10:49:32.743350   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:32.743424   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:32.743717   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:32.738777   17853 addons.go:69] Setting ingress-dns=true in profile "addons-454931"
	I0819 10:49:32.746958   17853 addons.go:234] Setting addon ingress-dns=true in "addons-454931"
	I0819 10:49:32.747035   17853 host.go:66] Checking if "addons-454931" exists ...
	I0819 10:49:32.738778   17853 addons.go:69] Setting inspektor-gadget=true in profile "addons-454931"
	I0819 10:49:32.747312   17853 addons.go:234] Setting addon inspektor-gadget=true in "addons-454931"
	I0819 10:49:32.747356   17853 host.go:66] Checking if "addons-454931" exists ...
	I0819 10:49:32.747925   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:32.738781   17853 addons.go:69] Setting metrics-server=true in profile "addons-454931"
	I0819 10:49:32.752833   17853 addons.go:234] Setting addon metrics-server=true in "addons-454931"
	I0819 10:49:32.752906   17853 host.go:66] Checking if "addons-454931" exists ...
	I0819 10:49:32.753430   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:32.739336   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:32.776446   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	W0819 10:49:32.781962   17853 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0819 10:49:32.788438   17853 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0819 10:49:32.800719   17853 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0819 10:49:32.803818   17853 host.go:66] Checking if "addons-454931" exists ...
	I0819 10:49:32.803872   17853 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-454931"
	I0819 10:49:32.803933   17853 host.go:66] Checking if "addons-454931" exists ...
	I0819 10:49:32.804401   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:32.804576   17853 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0819 10:49:32.804661   17853 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0819 10:49:32.804686   17853 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 10:49:32.804703   17853 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0819 10:49:32.818961   17853 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 10:49:32.819013   17853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 10:49:32.819069   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:32.817195   17853 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0819 10:49:32.819141   17853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0819 10:49:32.819204   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
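
Each file copy first asks Docker which host port is mapped to the node container's 22/tcp, using the inspect template shown above; the SSH clients below then dial 127.0.0.1 on that port (32768 in this run). The same template, quoted so it runs directly in a shell:

	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  addons-454931
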
	I0819 10:49:32.820908   17853 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0819 10:49:32.820949   17853 out.go:177]   - Using image docker.io/registry:2.8.3
	I0819 10:49:32.820911   17853 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0819 10:49:32.821003   17853 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0819 10:49:32.821497   17853 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0819 10:49:32.821093   17853 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 10:49:32.822239   17853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0819 10:49:32.822298   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:32.823084   17853 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 10:49:32.823138   17853 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0819 10:49:32.823157   17853 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0819 10:49:32.823240   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:32.824541   17853 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0819 10:49:32.824656   17853 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0819 10:49:32.824818   17853 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0819 10:49:32.824849   17853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0819 10:49:32.824935   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:32.825519   17853 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 10:49:32.825846   17853 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0819 10:49:32.825863   17853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0819 10:49:32.825910   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:32.827023   17853 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 10:49:32.827040   17853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0819 10:49:32.827272   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:32.827496   17853 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0819 10:49:32.828152   17853 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0819 10:49:32.829240   17853 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 10:49:32.829254   17853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0819 10:49:32.829319   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:32.830965   17853 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0819 10:49:32.832438   17853 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0819 10:49:32.833740   17853 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0819 10:49:32.836984   17853 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0819 10:49:32.837006   17853 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0819 10:49:32.837074   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:32.837613   17853 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0819 10:49:32.837650   17853 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0819 10:49:32.837700   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:32.837725   17853 addons.go:234] Setting addon default-storageclass=true in "addons-454931"
	I0819 10:49:32.837763   17853 host.go:66] Checking if "addons-454931" exists ...
	I0819 10:49:32.838236   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:32.841013   17853 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0819 10:49:32.842093   17853 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0819 10:49:32.842116   17853 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0819 10:49:32.842188   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:32.871660   17853 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0819 10:49:32.872826   17853 out.go:177]   - Using image docker.io/busybox:stable
	I0819 10:49:32.873925   17853 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 10:49:32.873950   17853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0819 10:49:32.874008   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:32.878762   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
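
This one-liner injects the host.minikube.internal record into CoreDNS: it dumps the coredns ConfigMap, uses sed to splice a hosts block (resolving 192.168.49.1, the Docker network gateway) ahead of the forward directive and a log directive after errors, then replaces the ConfigMap. The same pipeline split for readability, with the paths as in the log:

	KUBECTL="sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"
	$KUBECTL -n kube-system get configmap coredns -o yaml \
	  | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
	        -e '/^        errors *$/i \        log' \
	  | $KUBECTL replace -f -
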
	I0819 10:49:32.881465   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:32.885684   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:32.892993   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:32.894643   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:32.899663   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:32.905896   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:32.907691   17853 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0819 10:49:32.908831   17853 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 10:49:32.908860   17853 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 10:49:32.908931   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:32.910373   17853 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 10:49:32.910403   17853 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 10:49:32.910455   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:32.913144   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:32.915750   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:32.916321   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:32.917187   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:32.920051   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:32.924893   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:32.933176   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:32.934129   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	W0819 10:49:32.958908   17853 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0819 10:49:32.958943   17853 retry.go:31] will retry after 294.032151ms: ssh: handshake failed: EOF
	W0819 10:49:32.958994   17853 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0819 10:49:32.959016   17853 retry.go:31] will retry after 328.164025ms: ssh: handshake failed: EOF
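
The two handshake EOFs are a benign race: the dial lands before sshd in the node container is accepting connections, so retry.go backs off for a few hundred milliseconds and tries again. An illustrative shell version of the same pattern, with the port and user taken from the sshutil lines above:

	for delay in 0.3 0.6 1.2; do
	  ssh -p 32768 docker@127.0.0.1 true && break   # succeeds once sshd is up
	  sleep "$delay"
	done
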
	I0819 10:49:32.967023   17853 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:49:33.355168   17853 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 10:49:33.355217   17853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0819 10:49:33.369962   17853 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0819 10:49:33.370047   17853 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0819 10:49:33.375199   17853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 10:49:33.454377   17853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0819 10:49:33.454517   17853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 10:49:33.455666   17853 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 10:49:33.455735   17853 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 10:49:33.456448   17853 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0819 10:49:33.456500   17853 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0819 10:49:33.468918   17853 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0819 10:49:33.469004   17853 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0819 10:49:33.479312   17853 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0819 10:49:33.479367   17853 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0819 10:49:33.566488   17853 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0819 10:49:33.566566   17853 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0819 10:49:33.570034   17853 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0819 10:49:33.570114   17853 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0819 10:49:33.571163   17853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 10:49:33.574797   17853 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0819 10:49:33.574826   17853 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0819 10:49:33.575268   17853 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0819 10:49:33.575290   17853 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0819 10:49:33.655187   17853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 10:49:33.659896   17853 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 10:49:33.659926   17853 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 10:49:33.755220   17853 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0819 10:49:33.755251   17853 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0819 10:49:33.757082   17853 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0819 10:49:33.757106   17853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0819 10:49:33.765747   17853 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0819 10:49:33.765782   17853 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0819 10:49:33.858176   17853 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0819 10:49:33.858203   17853 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0819 10:49:33.874640   17853 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0819 10:49:33.874670   17853 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0819 10:49:33.955886   17853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 10:49:33.956814   17853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 10:49:34.054264   17853 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0819 10:49:34.054293   17853 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0819 10:49:34.055839   17853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 10:49:34.056304   17853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0819 10:49:34.066549   17853 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0819 10:49:34.066592   17853 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0819 10:49:34.077710   17853 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0819 10:49:34.077743   17853 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0819 10:49:34.165193   17853 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0819 10:49:34.165224   17853 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0819 10:49:34.354270   17853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0819 10:49:34.355349   17853 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0819 10:49:34.355431   17853 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0819 10:49:34.455229   17853 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0819 10:49:34.455276   17853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0819 10:49:34.459874   17853 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0819 10:49:34.459956   17853 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0819 10:49:34.461960   17853 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.583164802s)
	I0819 10:49:34.462028   17853 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0819 10:49:34.463125   17853 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.496073021s)
	I0819 10:49:34.464022   17853 node_ready.go:35] waiting up to 6m0s for node "addons-454931" to be "Ready" ...
	I0819 10:49:34.564334   17853 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0819 10:49:34.564422   17853 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0819 10:49:34.757395   17853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0819 10:49:34.761407   17853 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0819 10:49:34.761497   17853 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0819 10:49:34.855381   17853 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 10:49:34.855409   17853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0819 10:49:34.858937   17853 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0819 10:49:34.859024   17853 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	W0819 10:49:34.968939   17853 kapi.go:211] failed rescaling "coredns" deployment in "kube-system" namespace and "addons-454931" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0819 10:49:34.968969   17853 start.go:160] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
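
The rescale fails with the API server's optimistic-concurrency error: another writer updated the coredns Deployment between minikube's read and its write, so the stale resourceVersion is rejected. minikube treats this as non-retryable here, but done by hand the fix is simply to retry against the latest object, e.g.:

	kubectl -n kube-system scale deployment coredns --replicas=1
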
	I0819 10:49:35.355371   17853 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0819 10:49:35.355468   17853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0819 10:49:35.356436   17853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 10:49:35.467109   17853 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 10:49:35.467136   17853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0819 10:49:35.556350   17853 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0819 10:49:35.556383   17853 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0819 10:49:35.962165   17853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 10:49:35.977541   17853 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0819 10:49:35.977573   17853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0819 10:49:36.269135   17853 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0819 10:49:36.269226   17853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0819 10:49:36.564891   17853 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 10:49:36.564964   17853 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0819 10:49:36.578432   17853 node_ready.go:53] node "addons-454931" has status "Ready":"False"
	I0819 10:49:36.858242   17853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 10:49:37.176316   17853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.721899924s)
	I0819 10:49:37.176443   17853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.721905101s)
	I0819 10:49:37.176663   17853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.801379163s)
	I0819 10:49:37.660644   17853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.089405908s)
	I0819 10:49:37.660772   17853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.005546023s)
	I0819 10:49:37.660855   17853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.704936897s)
	W0819 10:49:37.861070   17853 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0819 10:49:37.957891   17853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.001033729s)
	I0819 10:49:37.957936   17853 addons.go:475] Verifying addon metrics-server=true in "addons-454931"
	I0819 10:49:38.970653   17853 node_ready.go:53] node "addons-454931" has status "Ready":"False"
	I0819 10:49:39.364596   17853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.30868271s)
	I0819 10:49:39.364666   17853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.308276546s)
	I0819 10:49:39.364685   17853 addons.go:475] Verifying addon ingress=true in "addons-454931"
	I0819 10:49:39.364694   17853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.010324286s)
	I0819 10:49:39.364704   17853 addons.go:475] Verifying addon registry=true in "addons-454931"
	I0819 10:49:39.364765   17853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.607282971s)
	I0819 10:49:39.366309   17853 out.go:177] * Verifying ingress addon...
	I0819 10:49:39.366311   17853 out.go:177] * Verifying registry addon...
	I0819 10:49:39.366310   17853 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-454931 service yakd-dashboard -n yakd-dashboard
	
	I0819 10:49:39.368710   17853 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0819 10:49:39.369531   17853 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0819 10:49:39.376434   17853 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0819 10:49:39.376455   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:39.376710   17853 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0819 10:49:39.376729   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
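
kapi.go implements the addon verification loop: list pods by label selector, log the current phase, and poll until they report Running. The repeated "current state: Pending" lines below are iterations of that loop; `kubectl wait` expresses roughly the same condition declaratively:

	kubectl -n ingress-nginx wait pod \
	  -l app.kubernetes.io/name=ingress-nginx \
	  --for=condition=Ready --timeout=6m
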
	I0819 10:49:39.872871   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:39.873322   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:40.054582   17853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.09237239s)
	I0819 10:49:40.054658   17853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.697932367s)
	W0819 10:49:40.054735   17853 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 10:49:40.054764   17853 retry.go:31] will retry after 334.320597ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
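
The failure is an ordering problem inside a single apply batch: the VolumeSnapshotClass object is submitted alongside the CRDs that define its kind, so the REST mapping for snapshot.storage.k8s.io/v1 does not exist yet, hence "ensure CRDs are installed first". The retry below (with --force) lands after the CRDs are registered and succeeds. The conventional two-phase fix, sketched with the same manifest paths:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
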
	I0819 10:49:40.058043   17853 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0819 10:49:40.058122   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:40.087684   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:40.371822   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:40.373272   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:40.375370   17853 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0819 10:49:40.390195   17853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 10:49:40.396035   17853 addons.go:234] Setting addon gcp-auth=true in "addons-454931"
	I0819 10:49:40.396133   17853 host.go:66] Checking if "addons-454931" exists ...
	I0819 10:49:40.396639   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:40.413145   17853 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0819 10:49:40.413198   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:40.429933   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:40.773077   17853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.914730891s)
	I0819 10:49:40.773120   17853 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-454931"
	I0819 10:49:40.774276   17853 out.go:177] * Verifying csi-hostpath-driver addon...
	I0819 10:49:40.776206   17853 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0819 10:49:40.782683   17853 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0819 10:49:40.782703   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:40.872611   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:40.873453   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:41.280442   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:41.380492   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:41.380895   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:41.392804   17853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.002559864s)
	I0819 10:49:41.394210   17853 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 10:49:41.395416   17853 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0819 10:49:41.396725   17853 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0819 10:49:41.396743   17853 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0819 10:49:41.413454   17853 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0819 10:49:41.413479   17853 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0819 10:49:41.429238   17853 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 10:49:41.429259   17853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0819 10:49:41.445063   17853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 10:49:41.467991   17853 node_ready.go:53] node "addons-454931" has status "Ready":"False"
	I0819 10:49:41.775472   17853 addons.go:475] Verifying addon gcp-auth=true in "addons-454931"
	I0819 10:49:41.776787   17853 out.go:177] * Verifying gcp-auth addon...
	I0819 10:49:41.778774   17853 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0819 10:49:41.779473   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:41.880558   17853 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0819 10:49:41.880582   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:41.880564   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:41.881007   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:42.279654   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:42.281289   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:42.372277   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:42.373500   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:42.780122   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:42.782550   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:42.874177   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:42.875421   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:43.280170   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:43.282607   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:43.372438   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:43.372890   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:43.779736   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:43.781187   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:43.871904   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:43.873288   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:43.969010   17853 node_ready.go:53] node "addons-454931" has status "Ready":"False"
	I0819 10:49:44.280044   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:44.281867   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:44.372657   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:44.373305   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:44.779275   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:44.781573   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:44.872615   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:44.872999   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:45.279459   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:45.281129   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:45.371696   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:45.372643   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:45.779475   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:45.781034   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:45.871751   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:45.872492   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:46.279322   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:46.280786   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:46.372476   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:46.372726   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:46.466673   17853 node_ready.go:53] node "addons-454931" has status "Ready":"False"
	I0819 10:49:46.779941   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:46.781192   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:46.880715   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:46.881432   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:47.279626   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:47.281214   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:47.371906   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:47.372873   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:47.780234   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:47.781524   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:47.872011   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:47.872505   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:48.279356   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:48.281050   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:48.371672   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:48.372736   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:48.466841   17853 node_ready.go:53] node "addons-454931" has status "Ready":"False"
	I0819 10:49:48.780022   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:48.781257   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:48.872000   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:48.872866   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:49.280224   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:49.281556   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:49.372122   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:49.372556   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:49.779355   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:49.781003   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:49.871197   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:49.872409   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:50.279595   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:50.281118   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:50.372006   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:50.372939   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:50.466908   17853 node_ready.go:53] node "addons-454931" has status "Ready":"False"
	I0819 10:49:50.779876   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:50.781272   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:50.871854   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:50.872754   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:51.280026   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:51.281377   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:51.371964   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:51.372292   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:51.779582   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:51.781813   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:51.872413   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:51.872969   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:51.972295   17853 node_ready.go:49] node "addons-454931" has status "Ready":"True"
	I0819 10:49:51.972322   17853 node_ready.go:38] duration metric: took 17.508245592s for node "addons-454931" to be "Ready" ...
	I0819 10:49:51.972333   17853 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
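[Editor's example] The node_ready.go lines above record minikube checking the node's Ready condition until it flips to "True" (here after ~17.5s). The Go sketch below shows an equivalent check with client-go; it is a minimal illustration, not minikube's implementation, and the kubeconfig location and node name are assumptions taken from this log, not from minikube's source.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node's Ready condition is True,
// mirroring the `node "addons-454931" has status "Ready":"False"/"True"`
// lines in the log above.
func nodeReady(client kubernetes.Interface, name string) (bool, error) {
	node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil // kubelet has not reported a Ready condition yet
}

func main() {
	// Assumption: a local kubeconfig at the default ~/.kube/config path.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ok, err := nodeReady(client, "addons-454931")
	if err != nil {
		panic(err)
	}
	fmt.Println("node Ready:", ok)
}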
	I0819 10:49:51.987266   17853 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-4lg4p" in "kube-system" namespace to be "Ready" ...
	I0819 10:49:52.281364   17853 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0819 10:49:52.281391   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:52.282637   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:52.375857   17853 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0819 10:49:52.375890   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:52.376507   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:52.781915   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:52.783684   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:52.882882   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:52.883089   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:53.281273   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:53.281286   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:53.381031   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:53.381207   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:53.492741   17853 pod_ready.go:93] pod "coredns-6f6b679f8f-4lg4p" in "kube-system" namespace has status "Ready":"True"
	I0819 10:49:53.492766   17853 pod_ready.go:82] duration metric: took 1.505464081s for pod "coredns-6f6b679f8f-4lg4p" in "kube-system" namespace to be "Ready" ...
	I0819 10:49:53.492776   17853 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hrnrm" in "kube-system" namespace to be "Ready" ...
	I0819 10:49:53.497456   17853 pod_ready.go:93] pod "coredns-6f6b679f8f-hrnrm" in "kube-system" namespace has status "Ready":"True"
	I0819 10:49:53.497480   17853 pod_ready.go:82] duration metric: took 4.697892ms for pod "coredns-6f6b679f8f-hrnrm" in "kube-system" namespace to be "Ready" ...
	I0819 10:49:53.497498   17853 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-454931" in "kube-system" namespace to be "Ready" ...
	I0819 10:49:53.502040   17853 pod_ready.go:93] pod "etcd-addons-454931" in "kube-system" namespace has status "Ready":"True"
	I0819 10:49:53.502072   17853 pod_ready.go:82] duration metric: took 4.566739ms for pod "etcd-addons-454931" in "kube-system" namespace to be "Ready" ...
	I0819 10:49:53.502092   17853 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-454931" in "kube-system" namespace to be "Ready" ...
	I0819 10:49:53.506745   17853 pod_ready.go:93] pod "kube-apiserver-addons-454931" in "kube-system" namespace has status "Ready":"True"
	I0819 10:49:53.506768   17853 pod_ready.go:82] duration metric: took 4.668906ms for pod "kube-apiserver-addons-454931" in "kube-system" namespace to be "Ready" ...
	I0819 10:49:53.506780   17853 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-454931" in "kube-system" namespace to be "Ready" ...
	I0819 10:49:53.567946   17853 pod_ready.go:93] pod "kube-controller-manager-addons-454931" in "kube-system" namespace has status "Ready":"True"
	I0819 10:49:53.567968   17853 pod_ready.go:82] duration metric: took 61.181375ms for pod "kube-controller-manager-addons-454931" in "kube-system" namespace to be "Ready" ...
	I0819 10:49:53.567981   17853 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8dmbm" in "kube-system" namespace to be "Ready" ...
	I0819 10:49:53.781581   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:53.781763   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:53.872888   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:53.873261   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:53.967287   17853 pod_ready.go:93] pod "kube-proxy-8dmbm" in "kube-system" namespace has status "Ready":"True"
	I0819 10:49:53.967324   17853 pod_ready.go:82] duration metric: took 399.337816ms for pod "kube-proxy-8dmbm" in "kube-system" namespace to be "Ready" ...
	I0819 10:49:53.967344   17853 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-454931" in "kube-system" namespace to be "Ready" ...
	I0819 10:49:54.281496   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:54.281862   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:54.367276   17853 pod_ready.go:93] pod "kube-scheduler-addons-454931" in "kube-system" namespace has status "Ready":"True"
	I0819 10:49:54.367300   17853 pod_ready.go:82] duration metric: took 399.948456ms for pod "kube-scheduler-addons-454931" in "kube-system" namespace to be "Ready" ...
	I0819 10:49:54.367311   17853 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-8988944d9-w697b" in "kube-system" namespace to be "Ready" ...
	I0819 10:49:54.373254   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:54.373887   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:54.780800   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:54.781987   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:54.872060   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:54.873092   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:55.280997   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:55.281319   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:55.381627   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:55.382253   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:55.781627   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:55.782805   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:55.871898   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:55.873829   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:56.281253   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:56.282177   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:56.372681   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:56.373512   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:49:56.374336   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:56.781020   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:56.781673   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:56.880042   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:56.880503   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:57.281302   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:57.281474   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:57.382112   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:57.382249   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:57.780731   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:57.781981   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:57.871599   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:57.872793   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:58.281315   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:58.281755   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:58.371558   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:58.372486   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:58.781208   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:58.781771   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:58.872175   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:49:58.872297   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:58.872534   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:59.280605   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:59.281708   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:59.373345   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:59.373912   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:59.781804   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:59.781940   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:59.871973   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:59.873246   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:00.281806   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:00.282475   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:00.371920   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:00.373554   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:00.780589   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:00.781068   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:00.871774   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:00.873288   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:00.873985   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:01.281226   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:01.282679   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:01.372551   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:01.373085   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:01.781470   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:01.784219   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:01.873574   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:01.875214   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:02.280648   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:02.281549   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:02.387435   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:02.388221   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:02.780435   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:02.781535   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:02.871826   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:02.873201   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:03.281805   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:03.283190   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:03.372185   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:03.373319   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:03.374009   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:03.782026   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:03.782315   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:03.871888   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:03.873652   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:04.281303   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:04.281811   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:04.371641   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:04.372602   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:04.780681   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:04.781858   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:04.871601   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:04.872948   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:05.281074   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:05.281879   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:05.371821   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:05.373417   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:05.780916   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:05.781413   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:05.872379   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:05.873158   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:05.873563   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:06.280659   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:06.281545   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:06.372421   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:06.373033   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:06.780573   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:06.781562   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:06.872906   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:06.873059   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:07.281083   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:07.281400   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:07.372558   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:07.372968   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:07.780783   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:07.781574   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:07.872775   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:07.873232   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:08.280855   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:08.281677   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:08.372709   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:08.372741   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:08.373395   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:08.780598   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:08.781623   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:08.872878   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:08.873096   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:09.281742   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:09.282210   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:09.372414   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:09.375916   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:09.781412   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:09.783765   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:09.873607   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:09.875022   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:10.355342   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:10.356498   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:10.461029   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:10.463063   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:10.470623   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:10.857700   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:10.859749   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:10.873195   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:10.875703   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:11.282121   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:11.282926   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:11.371642   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:11.374767   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:11.781541   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:11.782264   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:11.872179   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:11.873224   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:12.281600   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:12.282952   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:12.372661   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:12.374203   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:12.781373   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:12.781794   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:12.872074   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:12.873453   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:12.874088   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:13.282725   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:13.284144   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:13.372638   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:13.372761   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:13.780771   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:13.781306   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:13.872727   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:13.874142   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:14.280906   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:14.281852   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:14.371699   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:14.373308   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:14.780494   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:14.781710   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:14.873131   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:14.873541   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:15.281673   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:15.281821   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:15.373179   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:15.382662   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:15.383013   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:15.781657   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:15.782052   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:15.872074   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:15.873133   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:16.280998   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:16.281918   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:16.371662   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:16.373264   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:16.781973   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:16.782362   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:16.872173   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:16.873263   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:17.281806   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:17.281980   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:17.371820   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:17.373169   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:17.781296   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:17.781756   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:17.871746   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:17.872706   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:17.872707   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:18.281145   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:18.281741   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:18.373036   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:18.373513   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:18.781768   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:18.782140   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:18.881761   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:18.882315   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:19.281474   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:19.281727   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:19.372105   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:19.373222   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:19.781370   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:19.781568   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:19.872991   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:19.873342   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:19.873882   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:20.281143   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:20.282050   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:20.371789   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:20.373269   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:20.780593   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:20.781769   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:20.871999   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:20.872482   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:21.280499   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:21.281616   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:21.372472   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:21.372789   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:21.780369   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:21.781306   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:21.871906   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:21.873142   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:22.279825   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:22.281197   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:22.371915   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:22.373276   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:22.373743   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:22.780095   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:22.780799   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:22.872682   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:22.873374   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:23.281368   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:23.281589   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:23.372537   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:23.373478   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:23.782599   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:23.784929   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:23.872085   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:23.873033   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:24.280892   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:24.281550   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:24.373263   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:24.374216   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:24.781423   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:24.781676   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:24.873620   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:24.873757   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:24.873884   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:25.281410   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:25.281916   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:25.371552   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:25.372690   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:25.780483   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:25.781281   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:25.872146   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:25.873822   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:26.281211   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:26.281691   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:26.373530   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:26.373934   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:26.780711   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:26.781250   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:26.871715   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:26.872736   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:27.280698   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:27.281350   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:27.372131   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:27.372437   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:27.372802   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:27.780665   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:27.781610   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:27.872165   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:27.872573   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:28.280785   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:28.281539   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:28.372597   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:28.372832   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:28.779949   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:28.780910   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:28.871306   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:28.872455   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:29.281502   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:29.281586   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:29.372910   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:29.373315   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:29.373400   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:29.781673   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:29.781878   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:29.881245   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:29.881557   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:30.280201   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:30.280928   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:30.371872   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:30.373152   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:30.781416   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:30.781421   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:30.872106   17853 kapi.go:107] duration metric: took 51.503395032s to wait for kubernetes.io/minikube-addons=registry ...
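[Editor's example] The kapi.go:96 lines above are a poll loop: minikube repeatedly lists pods matching a label selector and logs the aggregate state ("Pending: [&lt;nil&gt;]") until every match is Running, at which point kapi.go:107 prints the duration, as the registry selector just did after ~51.5s. The client-go sketch below reproduces that style of check; it is a sketch only, not minikube's code, and the namespace, poll interval, and timeout are illustrative assumptions rather than values from this run.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a local kubeconfig at the default ~/.kube/config path.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	selector := "kubernetes.io/minikube-addons=registry" // label selector seen in the log
	// Assumed interval/timeout; poll until every matching pod is Running.
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err // stop polling on API errors
		}
		if len(pods.Items) == 0 {
			return false, nil // nothing scheduled yet; keep waiting
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				return false, nil // at least one pod still Pending; keep waiting
			}
		}
		return true, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("all pods matching", selector, "are Running")
}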
	I0819 10:50:30.872783   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:31.281232   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:31.282145   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:31.373618   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:31.781449   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:31.781745   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:31.871887   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:31.872499   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:32.280789   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:32.281845   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:32.372852   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:32.780637   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:32.781631   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:32.872926   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:33.280499   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:33.281365   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:33.373313   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:33.781671   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:33.781791   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:33.872604   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:33.872981   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:34.283049   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:34.283473   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:34.373134   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:34.780135   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:34.780837   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:34.880048   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:35.280531   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:35.281244   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:35.373452   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:35.781322   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:35.781495   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:35.873105   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:35.873606   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:36.341632   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:36.342476   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:36.373623   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:36.781418   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:36.781839   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:36.873176   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:37.282167   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:37.283438   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:37.382321   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:37.780708   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:37.781384   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:37.873473   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:37.874354   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:38.281792   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:38.283346   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:38.373933   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:38.781391   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:38.781487   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:38.873802   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:39.280370   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:39.282215   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:39.383564   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:39.781307   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:39.781319   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:39.873652   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:40.281153   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:40.281722   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:40.374002   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:40.375110   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:40.780414   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:40.781064   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:40.874743   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:41.279736   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:41.281908   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:41.373868   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:41.779726   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:41.781974   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:41.873428   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:42.281363   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:42.281678   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:42.372724   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:42.781180   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:42.782269   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:42.873698   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:42.874991   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:43.280556   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:43.281356   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:43.373927   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:43.781040   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:43.781728   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:43.881402   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:44.280677   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:44.281323   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:44.373581   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:44.781697   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:44.781962   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:44.873430   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:45.280877   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:45.281535   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:45.372307   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:45.372905   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:45.779925   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:45.782064   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:45.874946   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:46.279964   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:46.281934   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:46.373323   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:46.781967   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:46.782324   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:46.872990   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:47.281150   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:47.281420   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:47.377625   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:47.381368   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:47.810435   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:47.810946   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:47.912465   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:48.280762   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:48.281287   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:48.373039   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:48.782084   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:48.783310   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:48.874856   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:49.281387   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:49.283916   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:49.374186   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:49.780869   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:49.782101   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:49.873372   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:49.874090   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:50.359575   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:50.360012   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:50.378684   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:50.780394   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:50.781492   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:50.873275   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:51.281578   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:51.281925   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:51.372988   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:51.781385   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:51.781722   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:51.873206   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:52.281579   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:52.282074   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:52.373554   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:52.374357   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:52.780519   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:52.781350   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:52.872913   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:53.281258   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:53.282388   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:53.373945   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:53.781225   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:53.781234   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:53.872964   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:54.280390   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:54.281631   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:54.373146   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:54.781431   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:54.781522   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:54.873312   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:54.873678   17853 kapi.go:107] duration metric: took 1m15.504147744s to wait for app.kubernetes.io/name=ingress-nginx ...
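
The kapi.go:96 loop above polls pods by label until they leave Pending. A minimal client-side equivalent, not the test's own code (context and namespace names taken from this run; the 90s timeout is an example value):

    # block until every pod carrying the ingress-nginx label reports Ready
    kubectl --context addons-454931 -n ingress-nginx wait pod \
      --selector=app.kubernetes.io/name=ingress-nginx \
      --for=condition=Ready --timeout=90s
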
	I0819 10:50:55.281724   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:55.282664   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:55.781305   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:55.881223   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:56.280958   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:56.281816   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:56.781153   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:56.781740   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:57.280617   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:57.281847   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:57.372827   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:57.780842   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:57.781411   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:58.281525   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:58.281964   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:58.779566   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:58.781723   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:59.280765   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:59.281116   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:59.372859   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:59.781404   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:59.781547   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:51:00.281375   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:51:00.281437   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:51:00.781591   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:51:00.781673   17853 kapi.go:107] duration metric: took 1m19.002901269s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0819 10:51:00.783043   17853 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-454931 cluster.
	I0819 10:51:00.784390   17853 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0819 10:51:00.785681   17853 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
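
The gcp-auth-skip-secret opt-out described above is an ordinary pod label. A minimal sketch, assuming an already-running pod named my-pod (both the pod name and the "true" value are placeholders; only the label key matters):

    # exempt one pod from the gcp-auth credential mount
    kubectl --context addons-454931 label pod my-pod gcp-auth-skip-secret=true
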
	I0819 10:51:01.280068   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:51:01.373399   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:01.780642   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:51:02.280930   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:51:02.781388   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:51:03.281216   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:51:03.373450   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:03.780237   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:51:04.280205   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:51:04.781151   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:51:05.280746   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:51:05.780567   17853 kapi.go:107] duration metric: took 1m25.004358745s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0819 10:51:05.782407   17853 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner, nvidia-device-plugin, default-storageclass, metrics-server, helm-tiller, yakd, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0819 10:51:05.783567   17853 addons.go:510] duration metric: took 1m33.044978222s for enable addons: enabled=[cloud-spanner ingress-dns storage-provisioner nvidia-device-plugin default-storageclass metrics-server helm-tiller yakd inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
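
With all fourteen addons reported enabled, the set can be re-checked from the same binary the suite drives. A sketch using this run's profile name:

    # show per-addon enabled/disabled status for the profile
    out/minikube-linux-amd64 -p addons-454931 addons list
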
	I0819 10:51:05.872824   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:08.372865   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:10.373155   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:12.872481   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:14.873599   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:16.874667   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:19.372331   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:21.372565   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:23.372958   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:25.373010   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:27.871982   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:29.873323   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:32.372886   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:34.873426   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:36.875398   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:39.373815   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:41.872834   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:44.372596   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:46.372686   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:48.872097   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:50.872444   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:52.872795   17853 pod_ready.go:93] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"True"
	I0819 10:51:52.872818   17853 pod_ready.go:82] duration metric: took 1m58.50549999s for pod "metrics-server-8988944d9-w697b" in "kube-system" namespace to be "Ready" ...
	I0819 10:51:52.872830   17853 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-4xgtg" in "kube-system" namespace to be "Ready" ...
	I0819 10:51:52.877173   17853 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-4xgtg" in "kube-system" namespace has status "Ready":"True"
	I0819 10:51:52.877196   17853 pod_ready.go:82] duration metric: took 4.360181ms for pod "nvidia-device-plugin-daemonset-4xgtg" in "kube-system" namespace to be "Ready" ...
	I0819 10:51:52.877214   17853 pod_ready.go:39] duration metric: took 2m0.904868643s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
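
The pod_ready checks above key off the pod's Ready condition. Roughly the same probe by hand, using the metrics-server pod name from this run:

    # prints "True" once the Ready condition holds
    kubectl --context addons-454931 -n kube-system get pod metrics-server-8988944d9-w697b \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
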
	I0819 10:51:52.877230   17853 api_server.go:52] waiting for apiserver process to appear ...
	I0819 10:51:52.877257   17853 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 10:51:52.877314   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 10:51:52.914911   17853 cri.go:89] found id: "8e27c625be2e5c47c4c554fe2aba32321eba34cf34ee581ac879194dcee62b58"
	I0819 10:51:52.914938   17853 cri.go:89] found id: ""
	I0819 10:51:52.914948   17853 logs.go:276] 1 containers: [8e27c625be2e5c47c4c554fe2aba32321eba34cf34ee581ac879194dcee62b58]
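
crictl ps --quiet emits bare container IDs, which is what the logs.go steps below consume. The same two-step lookup by hand, run inside the node:

    # resolve the kube-apiserver container ID, then tail its logs
    id=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
    sudo crictl logs --tail 400 "$id"
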
	I0819 10:51:52.915004   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:52.918448   17853 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 10:51:52.918513   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 10:51:52.951922   17853 cri.go:89] found id: "5aa227674dce361724174026c8a0ea1cf2334d688e7db0f087b365e61b4dc933"
	I0819 10:51:52.951945   17853 cri.go:89] found id: ""
	I0819 10:51:52.951959   17853 logs.go:276] 1 containers: [5aa227674dce361724174026c8a0ea1cf2334d688e7db0f087b365e61b4dc933]
	I0819 10:51:52.952017   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:52.955280   17853 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 10:51:52.955339   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 10:51:52.989756   17853 cri.go:89] found id: "f6b69457461e9f416f747747d0f782733c7404dcdfac764ad764e7064665a63b"
	I0819 10:51:52.989779   17853 cri.go:89] found id: "efa219bf4f0691a53d5267f2849bea5346e24dd972e9ec60342f16521fe772cb"
	I0819 10:51:52.989783   17853 cri.go:89] found id: ""
	I0819 10:51:52.989790   17853 logs.go:276] 2 containers: [f6b69457461e9f416f747747d0f782733c7404dcdfac764ad764e7064665a63b efa219bf4f0691a53d5267f2849bea5346e24dd972e9ec60342f16521fe772cb]
	I0819 10:51:52.989845   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:52.993093   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:52.996208   17853 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 10:51:52.996278   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 10:51:53.031761   17853 cri.go:89] found id: "7d39664256a4d3ba4557123ef31052dad647643e97a23a78ed323a868076a590"
	I0819 10:51:53.031790   17853 cri.go:89] found id: ""
	I0819 10:51:53.031799   17853 logs.go:276] 1 containers: [7d39664256a4d3ba4557123ef31052dad647643e97a23a78ed323a868076a590]
	I0819 10:51:53.031845   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:53.035108   17853 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 10:51:53.035189   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 10:51:53.068589   17853 cri.go:89] found id: "548017acd8f1a56c38fd283ae52b35444913a48cb008849ea7beedf32999f2c5"
	I0819 10:51:53.068616   17853 cri.go:89] found id: ""
	I0819 10:51:53.068625   17853 logs.go:276] 1 containers: [548017acd8f1a56c38fd283ae52b35444913a48cb008849ea7beedf32999f2c5]
	I0819 10:51:53.068691   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:53.071994   17853 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 10:51:53.072065   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 10:51:53.105767   17853 cri.go:89] found id: "cc5123d3ccb34df5aeeed4f851f5aee34f31fd171451f9c676e54152d87b288f"
	I0819 10:51:53.105792   17853 cri.go:89] found id: ""
	I0819 10:51:53.105801   17853 logs.go:276] 1 containers: [cc5123d3ccb34df5aeeed4f851f5aee34f31fd171451f9c676e54152d87b288f]
	I0819 10:51:53.105862   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:53.109103   17853 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 10:51:53.109168   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 10:51:53.143013   17853 cri.go:89] found id: "a291ab855f115c38d50a242f470844271cc15d6b4a6415a2256a82bc4761595a"
	I0819 10:51:53.143038   17853 cri.go:89] found id: ""
	I0819 10:51:53.143047   17853 logs.go:276] 1 containers: [a291ab855f115c38d50a242f470844271cc15d6b4a6415a2256a82bc4761595a]
	I0819 10:51:53.143106   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:53.146616   17853 logs.go:123] Gathering logs for etcd [5aa227674dce361724174026c8a0ea1cf2334d688e7db0f087b365e61b4dc933] ...
	I0819 10:51:53.146642   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5aa227674dce361724174026c8a0ea1cf2334d688e7db0f087b365e61b4dc933"
	I0819 10:51:53.190231   17853 logs.go:123] Gathering logs for kube-proxy [548017acd8f1a56c38fd283ae52b35444913a48cb008849ea7beedf32999f2c5] ...
	I0819 10:51:53.190268   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 548017acd8f1a56c38fd283ae52b35444913a48cb008849ea7beedf32999f2c5"
	I0819 10:51:53.223493   17853 logs.go:123] Gathering logs for kube-controller-manager [cc5123d3ccb34df5aeeed4f851f5aee34f31fd171451f9c676e54152d87b288f] ...
	I0819 10:51:53.223521   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc5123d3ccb34df5aeeed4f851f5aee34f31fd171451f9c676e54152d87b288f"
	I0819 10:51:53.281859   17853 logs.go:123] Gathering logs for CRI-O ...
	I0819 10:51:53.281895   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 10:51:53.355149   17853 logs.go:123] Gathering logs for container status ...
	I0819 10:51:53.355186   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 10:51:53.396722   17853 logs.go:123] Gathering logs for coredns [efa219bf4f0691a53d5267f2849bea5346e24dd972e9ec60342f16521fe772cb] ...
	I0819 10:51:53.396750   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efa219bf4f0691a53d5267f2849bea5346e24dd972e9ec60342f16521fe772cb"
	I0819 10:51:53.433556   17853 logs.go:123] Gathering logs for kube-scheduler [7d39664256a4d3ba4557123ef31052dad647643e97a23a78ed323a868076a590] ...
	I0819 10:51:53.433623   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d39664256a4d3ba4557123ef31052dad647643e97a23a78ed323a868076a590"
	I0819 10:51:53.472654   17853 logs.go:123] Gathering logs for kindnet [a291ab855f115c38d50a242f470844271cc15d6b4a6415a2256a82bc4761595a] ...
	I0819 10:51:53.472687   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a291ab855f115c38d50a242f470844271cc15d6b4a6415a2256a82bc4761595a"
	I0819 10:51:53.512392   17853 logs.go:123] Gathering logs for kubelet ...
	I0819 10:51:53.512425   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 10:51:53.568456   17853 logs.go:123] Gathering logs for dmesg ...
	I0819 10:51:53.568491   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 10:51:53.581351   17853 logs.go:123] Gathering logs for describe nodes ...
	I0819 10:51:53.581382   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 10:51:53.678182   17853 logs.go:123] Gathering logs for kube-apiserver [8e27c625be2e5c47c4c554fe2aba32321eba34cf34ee581ac879194dcee62b58] ...
	I0819 10:51:53.678212   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e27c625be2e5c47c4c554fe2aba32321eba34cf34ee581ac879194dcee62b58"
	I0819 10:51:53.723799   17853 logs.go:123] Gathering logs for coredns [f6b69457461e9f416f747747d0f782733c7404dcdfac764ad764e7064665a63b] ...
	I0819 10:51:53.723834   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6b69457461e9f416f747747d0f782733c7404dcdfac764ad764e7064665a63b"
	I0819 10:51:56.259755   17853 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:51:56.273205   17853 api_server.go:72] duration metric: took 2m23.5346621s to wait for apiserver process to appear ...
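
The process check is a plain pgrep against the apiserver command line. A sketch of the same probe via minikube ssh (the quoting of the regex here is approximate):

    # exit status 0 means a matching kube-apiserver process exists
    out/minikube-linux-amd64 -p addons-454931 ssh "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"
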
	I0819 10:51:56.273228   17853 api_server.go:88] waiting for apiserver healthz status ...
	I0819 10:51:56.273263   17853 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 10:51:56.273314   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 10:51:56.306898   17853 cri.go:89] found id: "8e27c625be2e5c47c4c554fe2aba32321eba34cf34ee581ac879194dcee62b58"
	I0819 10:51:56.306919   17853 cri.go:89] found id: ""
	I0819 10:51:56.306927   17853 logs.go:276] 1 containers: [8e27c625be2e5c47c4c554fe2aba32321eba34cf34ee581ac879194dcee62b58]
	I0819 10:51:56.306986   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:56.310296   17853 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 10:51:56.310350   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 10:51:56.343684   17853 cri.go:89] found id: "5aa227674dce361724174026c8a0ea1cf2334d688e7db0f087b365e61b4dc933"
	I0819 10:51:56.343711   17853 cri.go:89] found id: ""
	I0819 10:51:56.343719   17853 logs.go:276] 1 containers: [5aa227674dce361724174026c8a0ea1cf2334d688e7db0f087b365e61b4dc933]
	I0819 10:51:56.343760   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:56.347064   17853 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 10:51:56.347120   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 10:51:56.380312   17853 cri.go:89] found id: "f6b69457461e9f416f747747d0f782733c7404dcdfac764ad764e7064665a63b"
	I0819 10:51:56.380337   17853 cri.go:89] found id: "efa219bf4f0691a53d5267f2849bea5346e24dd972e9ec60342f16521fe772cb"
	I0819 10:51:56.380342   17853 cri.go:89] found id: ""
	I0819 10:51:56.380349   17853 logs.go:276] 2 containers: [f6b69457461e9f416f747747d0f782733c7404dcdfac764ad764e7064665a63b efa219bf4f0691a53d5267f2849bea5346e24dd972e9ec60342f16521fe772cb]
	I0819 10:51:56.380392   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:56.383690   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:56.386752   17853 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 10:51:56.386816   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 10:51:56.418933   17853 cri.go:89] found id: "7d39664256a4d3ba4557123ef31052dad647643e97a23a78ed323a868076a590"
	I0819 10:51:56.418956   17853 cri.go:89] found id: ""
	I0819 10:51:56.418964   17853 logs.go:276] 1 containers: [7d39664256a4d3ba4557123ef31052dad647643e97a23a78ed323a868076a590]
	I0819 10:51:56.419008   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:56.422291   17853 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 10:51:56.422360   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 10:51:56.455813   17853 cri.go:89] found id: "548017acd8f1a56c38fd283ae52b35444913a48cb008849ea7beedf32999f2c5"
	I0819 10:51:56.455837   17853 cri.go:89] found id: ""
	I0819 10:51:56.455845   17853 logs.go:276] 1 containers: [548017acd8f1a56c38fd283ae52b35444913a48cb008849ea7beedf32999f2c5]
	I0819 10:51:56.455885   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:56.459251   17853 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 10:51:56.459324   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 10:51:56.492997   17853 cri.go:89] found id: "cc5123d3ccb34df5aeeed4f851f5aee34f31fd171451f9c676e54152d87b288f"
	I0819 10:51:56.493020   17853 cri.go:89] found id: ""
	I0819 10:51:56.493028   17853 logs.go:276] 1 containers: [cc5123d3ccb34df5aeeed4f851f5aee34f31fd171451f9c676e54152d87b288f]
	I0819 10:51:56.493076   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:56.496459   17853 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 10:51:56.496516   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 10:51:56.529763   17853 cri.go:89] found id: "a291ab855f115c38d50a242f470844271cc15d6b4a6415a2256a82bc4761595a"
	I0819 10:51:56.529786   17853 cri.go:89] found id: ""
	I0819 10:51:56.529797   17853 logs.go:276] 1 containers: [a291ab855f115c38d50a242f470844271cc15d6b4a6415a2256a82bc4761595a]
	I0819 10:51:56.529849   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:56.533164   17853 logs.go:123] Gathering logs for describe nodes ...
	I0819 10:51:56.533190   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 10:51:56.634293   17853 logs.go:123] Gathering logs for coredns [f6b69457461e9f416f747747d0f782733c7404dcdfac764ad764e7064665a63b] ...
	I0819 10:51:56.634330   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6b69457461e9f416f747747d0f782733c7404dcdfac764ad764e7064665a63b"
	I0819 10:51:56.669150   17853 logs.go:123] Gathering logs for kube-scheduler [7d39664256a4d3ba4557123ef31052dad647643e97a23a78ed323a868076a590] ...
	I0819 10:51:56.669182   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d39664256a4d3ba4557123ef31052dad647643e97a23a78ed323a868076a590"
	I0819 10:51:56.710888   17853 logs.go:123] Gathering logs for kube-proxy [548017acd8f1a56c38fd283ae52b35444913a48cb008849ea7beedf32999f2c5] ...
	I0819 10:51:56.710921   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 548017acd8f1a56c38fd283ae52b35444913a48cb008849ea7beedf32999f2c5"
	I0819 10:51:56.743711   17853 logs.go:123] Gathering logs for kube-controller-manager [cc5123d3ccb34df5aeeed4f851f5aee34f31fd171451f9c676e54152d87b288f] ...
	I0819 10:51:56.743736   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc5123d3ccb34df5aeeed4f851f5aee34f31fd171451f9c676e54152d87b288f"
	I0819 10:51:56.798887   17853 logs.go:123] Gathering logs for CRI-O ...
	I0819 10:51:56.798929   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 10:51:56.876928   17853 logs.go:123] Gathering logs for kubelet ...
	I0819 10:51:56.876968   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 10:51:56.930006   17853 logs.go:123] Gathering logs for dmesg ...
	I0819 10:51:56.930041   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 10:51:56.942264   17853 logs.go:123] Gathering logs for kube-apiserver [8e27c625be2e5c47c4c554fe2aba32321eba34cf34ee581ac879194dcee62b58] ...
	I0819 10:51:56.942293   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e27c625be2e5c47c4c554fe2aba32321eba34cf34ee581ac879194dcee62b58"
	I0819 10:51:56.987194   17853 logs.go:123] Gathering logs for etcd [5aa227674dce361724174026c8a0ea1cf2334d688e7db0f087b365e61b4dc933] ...
	I0819 10:51:56.987226   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5aa227674dce361724174026c8a0ea1cf2334d688e7db0f087b365e61b4dc933"
	I0819 10:51:57.030290   17853 logs.go:123] Gathering logs for coredns [efa219bf4f0691a53d5267f2849bea5346e24dd972e9ec60342f16521fe772cb] ...
	I0819 10:51:57.030319   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efa219bf4f0691a53d5267f2849bea5346e24dd972e9ec60342f16521fe772cb"
	I0819 10:51:57.066447   17853 logs.go:123] Gathering logs for kindnet [a291ab855f115c38d50a242f470844271cc15d6b4a6415a2256a82bc4761595a] ...
	I0819 10:51:57.066482   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a291ab855f115c38d50a242f470844271cc15d6b4a6415a2256a82bc4761595a"
	I0819 10:51:57.106049   17853 logs.go:123] Gathering logs for container status ...
	I0819 10:51:57.106084   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 10:51:59.648050   17853 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 10:51:59.651630   17853 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0819 10:51:59.652431   17853 api_server.go:141] control plane version: v1.31.0
	I0819 10:51:59.652452   17853 api_server.go:131] duration metric: took 3.379218933s to wait for apiserver health ...
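
The healthz probe is an unauthenticated GET against the apiserver endpoint shown above. Reproduced by hand (-k skips TLS verification, since only the response body is of interest):

    # expect HTTP 200 with body "ok"
    curl -k https://192.168.49.2:8443/healthz
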
	I0819 10:51:59.652460   17853 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 10:51:59.652480   17853 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 10:51:59.652526   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 10:51:59.686278   17853 cri.go:89] found id: "8e27c625be2e5c47c4c554fe2aba32321eba34cf34ee581ac879194dcee62b58"
	I0819 10:51:59.686297   17853 cri.go:89] found id: ""
	I0819 10:51:59.686305   17853 logs.go:276] 1 containers: [8e27c625be2e5c47c4c554fe2aba32321eba34cf34ee581ac879194dcee62b58]
	I0819 10:51:59.686346   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:59.689372   17853 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 10:51:59.689425   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 10:51:59.723001   17853 cri.go:89] found id: "5aa227674dce361724174026c8a0ea1cf2334d688e7db0f087b365e61b4dc933"
	I0819 10:51:59.723022   17853 cri.go:89] found id: ""
	I0819 10:51:59.723031   17853 logs.go:276] 1 containers: [5aa227674dce361724174026c8a0ea1cf2334d688e7db0f087b365e61b4dc933]
	I0819 10:51:59.723090   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:59.726444   17853 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 10:51:59.726520   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 10:51:59.759668   17853 cri.go:89] found id: "f6b69457461e9f416f747747d0f782733c7404dcdfac764ad764e7064665a63b"
	I0819 10:51:59.759692   17853 cri.go:89] found id: "efa219bf4f0691a53d5267f2849bea5346e24dd972e9ec60342f16521fe772cb"
	I0819 10:51:59.759696   17853 cri.go:89] found id: ""
	I0819 10:51:59.759707   17853 logs.go:276] 2 containers: [f6b69457461e9f416f747747d0f782733c7404dcdfac764ad764e7064665a63b efa219bf4f0691a53d5267f2849bea5346e24dd972e9ec60342f16521fe772cb]
	I0819 10:51:59.759768   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:59.763459   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:59.767030   17853 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 10:51:59.767112   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 10:51:59.801139   17853 cri.go:89] found id: "7d39664256a4d3ba4557123ef31052dad647643e97a23a78ed323a868076a590"
	I0819 10:51:59.801160   17853 cri.go:89] found id: ""
	I0819 10:51:59.801168   17853 logs.go:276] 1 containers: [7d39664256a4d3ba4557123ef31052dad647643e97a23a78ed323a868076a590]
	I0819 10:51:59.801223   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:59.804661   17853 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 10:51:59.804727   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 10:51:59.837183   17853 cri.go:89] found id: "548017acd8f1a56c38fd283ae52b35444913a48cb008849ea7beedf32999f2c5"
	I0819 10:51:59.837202   17853 cri.go:89] found id: ""
	I0819 10:51:59.837208   17853 logs.go:276] 1 containers: [548017acd8f1a56c38fd283ae52b35444913a48cb008849ea7beedf32999f2c5]
	I0819 10:51:59.837251   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:59.840821   17853 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 10:51:59.840876   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 10:51:59.875289   17853 cri.go:89] found id: "cc5123d3ccb34df5aeeed4f851f5aee34f31fd171451f9c676e54152d87b288f"
	I0819 10:51:59.875315   17853 cri.go:89] found id: ""
	I0819 10:51:59.875322   17853 logs.go:276] 1 containers: [cc5123d3ccb34df5aeeed4f851f5aee34f31fd171451f9c676e54152d87b288f]
	I0819 10:51:59.875365   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:59.878793   17853 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 10:51:59.878862   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 10:51:59.911885   17853 cri.go:89] found id: "a291ab855f115c38d50a242f470844271cc15d6b4a6415a2256a82bc4761595a"
	I0819 10:51:59.911909   17853 cri.go:89] found id: ""
	I0819 10:51:59.911919   17853 logs.go:276] 1 containers: [a291ab855f115c38d50a242f470844271cc15d6b4a6415a2256a82bc4761595a]
	I0819 10:51:59.911960   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:59.915146   17853 logs.go:123] Gathering logs for coredns [efa219bf4f0691a53d5267f2849bea5346e24dd972e9ec60342f16521fe772cb] ...
	I0819 10:51:59.915170   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efa219bf4f0691a53d5267f2849bea5346e24dd972e9ec60342f16521fe772cb"
	I0819 10:51:59.951104   17853 logs.go:123] Gathering logs for kube-scheduler [7d39664256a4d3ba4557123ef31052dad647643e97a23a78ed323a868076a590] ...
	I0819 10:51:59.951132   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d39664256a4d3ba4557123ef31052dad647643e97a23a78ed323a868076a590"
	I0819 10:51:59.989911   17853 logs.go:123] Gathering logs for kubelet ...
	I0819 10:51:59.989948   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 10:52:00.045238   17853 logs.go:123] Gathering logs for dmesg ...
	I0819 10:52:00.045289   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 10:52:00.058175   17853 logs.go:123] Gathering logs for describe nodes ...
	I0819 10:52:00.058204   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 10:52:00.158288   17853 logs.go:123] Gathering logs for kube-apiserver [8e27c625be2e5c47c4c554fe2aba32321eba34cf34ee581ac879194dcee62b58] ...
	I0819 10:52:00.158317   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e27c625be2e5c47c4c554fe2aba32321eba34cf34ee581ac879194dcee62b58"
	I0819 10:52:00.204448   17853 logs.go:123] Gathering logs for etcd [5aa227674dce361724174026c8a0ea1cf2334d688e7db0f087b365e61b4dc933] ...
	I0819 10:52:00.204495   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5aa227674dce361724174026c8a0ea1cf2334d688e7db0f087b365e61b4dc933"
	I0819 10:52:00.247754   17853 logs.go:123] Gathering logs for coredns [f6b69457461e9f416f747747d0f782733c7404dcdfac764ad764e7064665a63b] ...
	I0819 10:52:00.247799   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6b69457461e9f416f747747d0f782733c7404dcdfac764ad764e7064665a63b"
	I0819 10:52:00.284966   17853 logs.go:123] Gathering logs for CRI-O ...
	I0819 10:52:00.284998   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 10:52:00.363031   17853 logs.go:123] Gathering logs for kube-proxy [548017acd8f1a56c38fd283ae52b35444913a48cb008849ea7beedf32999f2c5] ...
	I0819 10:52:00.363070   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 548017acd8f1a56c38fd283ae52b35444913a48cb008849ea7beedf32999f2c5"
	I0819 10:52:00.396795   17853 logs.go:123] Gathering logs for kube-controller-manager [cc5123d3ccb34df5aeeed4f851f5aee34f31fd171451f9c676e54152d87b288f] ...
	I0819 10:52:00.396818   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc5123d3ccb34df5aeeed4f851f5aee34f31fd171451f9c676e54152d87b288f"
	I0819 10:52:00.456396   17853 logs.go:123] Gathering logs for kindnet [a291ab855f115c38d50a242f470844271cc15d6b4a6415a2256a82bc4761595a] ...
	I0819 10:52:00.456434   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a291ab855f115c38d50a242f470844271cc15d6b4a6415a2256a82bc4761595a"
	I0819 10:52:00.498202   17853 logs.go:123] Gathering logs for container status ...
	I0819 10:52:00.498233   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 10:52:03.051974   17853 system_pods.go:59] 20 kube-system pods found
	I0819 10:52:03.052005   17853 system_pods.go:61] "coredns-6f6b679f8f-4lg4p" [68bbaa37-27c9-491f-9299-a9fbb8e3c6aa] Running
	I0819 10:52:03.052010   17853 system_pods.go:61] "coredns-6f6b679f8f-hrnrm" [6622e471-6bfe-4b7f-8472-c5fbc9a7a6aa] Running
	I0819 10:52:03.052014   17853 system_pods.go:61] "csi-hostpath-attacher-0" [55bd1bec-37db-4934-bf73-0fd7d404a31a] Running
	I0819 10:52:03.052018   17853 system_pods.go:61] "csi-hostpath-resizer-0" [bcacc22c-91ac-438f-9425-d9dee1d7f8e4] Running
	I0819 10:52:03.052021   17853 system_pods.go:61] "csi-hostpathplugin-dfmfz" [d62f85fe-9bf5-4f41-9f85-3657f60b6e20] Running
	I0819 10:52:03.052024   17853 system_pods.go:61] "etcd-addons-454931" [5df4cd50-b241-4d2d-8393-b1f5b8fdafc7] Running
	I0819 10:52:03.052027   17853 system_pods.go:61] "kindnet-82zcc" [60e4e9fc-e115-4f32-8217-740dd919dc7d] Running
	I0819 10:52:03.052030   17853 system_pods.go:61] "kube-apiserver-addons-454931" [22bdb559-bd55-4bb9-b545-0d6eec0f6230] Running
	I0819 10:52:03.052033   17853 system_pods.go:61] "kube-controller-manager-addons-454931" [61aa2aac-e0c0-47f7-9915-afca23cdb2da] Running
	I0819 10:52:03.052036   17853 system_pods.go:61] "kube-ingress-dns-minikube" [8c0f4e82-c7eb-4302-bbfc-b9a95ab55947] Running
	I0819 10:52:03.052039   17853 system_pods.go:61] "kube-proxy-8dmbm" [21b8778a-872e-41ff-89cb-1d6ef217e957] Running
	I0819 10:52:03.052042   17853 system_pods.go:61] "kube-scheduler-addons-454931" [f9f38926-033a-4916-8383-9ae977b6b3d0] Running
	I0819 10:52:03.052045   17853 system_pods.go:61] "metrics-server-8988944d9-w697b" [7c3b07c1-62d8-4b80-b68f-5f7a56a385a4] Running
	I0819 10:52:03.052049   17853 system_pods.go:61] "nvidia-device-plugin-daemonset-4xgtg" [9f3c31d4-b4dd-4fc8-b9c4-1ca0c24775c8] Running
	I0819 10:52:03.052053   17853 system_pods.go:61] "registry-6fb4cdfc84-v7654" [d56000ae-59d9-4ff4-afc3-c173d1aa817f] Running
	I0819 10:52:03.052056   17853 system_pods.go:61] "registry-proxy-sjwlk" [497530f4-1b24-4840-a1d3-6d7174146af0] Running
	I0819 10:52:03.052059   17853 system_pods.go:61] "snapshot-controller-56fcc65765-84zqr" [4cfe5ad2-0a88-4a39-9d55-f4d66d60ea3a] Running
	I0819 10:52:03.052063   17853 system_pods.go:61] "snapshot-controller-56fcc65765-jjwss" [99541df2-d840-480a-8652-8e38b7a53574] Running
	I0819 10:52:03.052066   17853 system_pods.go:61] "storage-provisioner" [b4d4a5ac-4c79-414c-a9e3-960d790962a5] Running
	I0819 10:52:03.052070   17853 system_pods.go:61] "tiller-deploy-b48cc5f79-cdqdx" [e734e815-6d31-40f3-98f0-cc7c3f38ba44] Running
	I0819 10:52:03.052076   17853 system_pods.go:74] duration metric: took 3.399611618s to wait for pod list to return data ...
	I0819 10:52:03.052088   17853 default_sa.go:34] waiting for default service account to be created ...
	I0819 10:52:03.054114   17853 default_sa.go:45] found service account: "default"
	I0819 10:52:03.054135   17853 default_sa.go:55] duration metric: took 2.041965ms for default service account to be created ...
	I0819 10:52:03.054142   17853 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 10:52:03.062236   17853 system_pods.go:86] 20 kube-system pods found
	I0819 10:52:03.062267   17853 system_pods.go:89] "coredns-6f6b679f8f-4lg4p" [68bbaa37-27c9-491f-9299-a9fbb8e3c6aa] Running
	I0819 10:52:03.062273   17853 system_pods.go:89] "coredns-6f6b679f8f-hrnrm" [6622e471-6bfe-4b7f-8472-c5fbc9a7a6aa] Running
	I0819 10:52:03.062278   17853 system_pods.go:89] "csi-hostpath-attacher-0" [55bd1bec-37db-4934-bf73-0fd7d404a31a] Running
	I0819 10:52:03.062283   17853 system_pods.go:89] "csi-hostpath-resizer-0" [bcacc22c-91ac-438f-9425-d9dee1d7f8e4] Running
	I0819 10:52:03.062287   17853 system_pods.go:89] "csi-hostpathplugin-dfmfz" [d62f85fe-9bf5-4f41-9f85-3657f60b6e20] Running
	I0819 10:52:03.062290   17853 system_pods.go:89] "etcd-addons-454931" [5df4cd50-b241-4d2d-8393-b1f5b8fdafc7] Running
	I0819 10:52:03.062293   17853 system_pods.go:89] "kindnet-82zcc" [60e4e9fc-e115-4f32-8217-740dd919dc7d] Running
	I0819 10:52:03.062297   17853 system_pods.go:89] "kube-apiserver-addons-454931" [22bdb559-bd55-4bb9-b545-0d6eec0f6230] Running
	I0819 10:52:03.062301   17853 system_pods.go:89] "kube-controller-manager-addons-454931" [61aa2aac-e0c0-47f7-9915-afca23cdb2da] Running
	I0819 10:52:03.062312   17853 system_pods.go:89] "kube-ingress-dns-minikube" [8c0f4e82-c7eb-4302-bbfc-b9a95ab55947] Running
	I0819 10:52:03.062315   17853 system_pods.go:89] "kube-proxy-8dmbm" [21b8778a-872e-41ff-89cb-1d6ef217e957] Running
	I0819 10:52:03.062320   17853 system_pods.go:89] "kube-scheduler-addons-454931" [f9f38926-033a-4916-8383-9ae977b6b3d0] Running
	I0819 10:52:03.062326   17853 system_pods.go:89] "metrics-server-8988944d9-w697b" [7c3b07c1-62d8-4b80-b68f-5f7a56a385a4] Running
	I0819 10:52:03.062331   17853 system_pods.go:89] "nvidia-device-plugin-daemonset-4xgtg" [9f3c31d4-b4dd-4fc8-b9c4-1ca0c24775c8] Running
	I0819 10:52:03.062335   17853 system_pods.go:89] "registry-6fb4cdfc84-v7654" [d56000ae-59d9-4ff4-afc3-c173d1aa817f] Running
	I0819 10:52:03.062339   17853 system_pods.go:89] "registry-proxy-sjwlk" [497530f4-1b24-4840-a1d3-6d7174146af0] Running
	I0819 10:52:03.062342   17853 system_pods.go:89] "snapshot-controller-56fcc65765-84zqr" [4cfe5ad2-0a88-4a39-9d55-f4d66d60ea3a] Running
	I0819 10:52:03.062355   17853 system_pods.go:89] "snapshot-controller-56fcc65765-jjwss" [99541df2-d840-480a-8652-8e38b7a53574] Running
	I0819 10:52:03.062358   17853 system_pods.go:89] "storage-provisioner" [b4d4a5ac-4c79-414c-a9e3-960d790962a5] Running
	I0819 10:52:03.062361   17853 system_pods.go:89] "tiller-deploy-b48cc5f79-cdqdx" [e734e815-6d31-40f3-98f0-cc7c3f38ba44] Running
	I0819 10:52:03.062368   17853 system_pods.go:126] duration metric: took 8.22126ms to wait for k8s-apps to be running ...
	I0819 10:52:03.062377   17853 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 10:52:03.062422   17853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:52:03.073756   17853 system_svc.go:56] duration metric: took 11.371549ms WaitForService to wait for kubelet
	I0819 10:52:03.073784   17853 kubeadm.go:582] duration metric: took 2m30.33524262s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 10:52:03.073811   17853 node_conditions.go:102] verifying NodePressure condition ...
	I0819 10:52:03.076709   17853 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0819 10:52:03.076736   17853 node_conditions.go:123] node cpu capacity is 8
	I0819 10:52:03.076753   17853 node_conditions.go:105] duration metric: took 2.936409ms to run NodePressure ...
	I0819 10:52:03.076763   17853 start.go:241] waiting for startup goroutines ...
	I0819 10:52:03.076773   17853 start.go:246] waiting for cluster config update ...
	I0819 10:52:03.076796   17853 start.go:255] writing updated cluster config ...
	I0819 10:52:03.077085   17853 ssh_runner.go:195] Run: rm -f paused
	I0819 10:52:03.127849   17853 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 10:52:03.130499   17853 out.go:177] * Done! kubectl is now configured to use "addons-454931" cluster and "default" namespace by default
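The block above is minikube's own post-start verification: it pulls each component's logs over SSH with crictl, polls the kube-system pods, and checks the kubelet unit via systemctl. A minimal sketch of reproducing the same gathering step by hand against this profile (the container ID prefix is from this run; crictl resolves unique prefixes, and `minikube ssh --` forwards the command to the node):

	minikube -p addons-454931 ssh -- sudo /usr/bin/crictl ps -a
	minikube -p addons-454931 ssh -- sudo /usr/bin/crictl logs --tail 400 548017acd8f1a
	minikube -p addons-454931 ssh -- sudo systemctl is-active --quiet service kubelet && echo kubelet active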
	
	
	==> CRI-O <==
	Aug 19 10:55:17 addons-454931 crio[1027]: time="2024-08-19 10:55:17.051042228Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=9260e7ff-66c6-4bf2-b758-ce5f0bfe71e9 name=/runtime.v1.ImageService/ImageStatus
	Aug 19 10:55:17 addons-454931 crio[1027]: time="2024-08-19 10:55:17.051600402Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86],Size_:4944818,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=9260e7ff-66c6-4bf2-b758-ce5f0bfe71e9 name=/runtime.v1.ImageService/ImageStatus
	Aug 19 10:55:17 addons-454931 crio[1027]: time="2024-08-19 10:55:17.052391735Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=904a5845-1b90-4fff-8d86-361b55bf8567 name=/runtime.v1.ImageService/ImageStatus
	Aug 19 10:55:17 addons-454931 crio[1027]: time="2024-08-19 10:55:17.052916852Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86],Size_:4944818,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=904a5845-1b90-4fff-8d86-361b55bf8567 name=/runtime.v1.ImageService/ImageStatus
	Aug 19 10:55:17 addons-454931 crio[1027]: time="2024-08-19 10:55:17.053691939Z" level=info msg="Creating container: default/hello-world-app-55bf9c44b4-9zzxq/hello-world-app" id=939528d9-dec2-43d3-b4d9-ed14c1e3ca3d name=/runtime.v1.RuntimeService/CreateContainer
	Aug 19 10:55:17 addons-454931 crio[1027]: time="2024-08-19 10:55:17.053814506Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 19 10:55:17 addons-454931 crio[1027]: time="2024-08-19 10:55:17.068140070Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5648aeab4866666108cf4c465a197a197b90e8cdea0bc2f097561e0dc378a59a/merged/etc/passwd: no such file or directory"
	Aug 19 10:55:17 addons-454931 crio[1027]: time="2024-08-19 10:55:17.068176772Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5648aeab4866666108cf4c465a197a197b90e8cdea0bc2f097561e0dc378a59a/merged/etc/group: no such file or directory"
	Aug 19 10:55:17 addons-454931 crio[1027]: time="2024-08-19 10:55:17.104096095Z" level=info msg="Created container cc6c18b26eee1888aaa80be4961da5a92a2a83fc86fb647b5a7e8e7026993c3d: default/hello-world-app-55bf9c44b4-9zzxq/hello-world-app" id=939528d9-dec2-43d3-b4d9-ed14c1e3ca3d name=/runtime.v1.RuntimeService/CreateContainer
	Aug 19 10:55:17 addons-454931 crio[1027]: time="2024-08-19 10:55:17.104832479Z" level=info msg="Starting container: cc6c18b26eee1888aaa80be4961da5a92a2a83fc86fb647b5a7e8e7026993c3d" id=6fbf62fc-7890-4cda-b512-f63b2c7ba56b name=/runtime.v1.RuntimeService/StartContainer
	Aug 19 10:55:17 addons-454931 crio[1027]: time="2024-08-19 10:55:17.111545794Z" level=info msg="Started container" PID=11179 containerID=cc6c18b26eee1888aaa80be4961da5a92a2a83fc86fb647b5a7e8e7026993c3d description=default/hello-world-app-55bf9c44b4-9zzxq/hello-world-app id=6fbf62fc-7890-4cda-b512-f63b2c7ba56b name=/runtime.v1.RuntimeService/StartContainer sandboxID=18e8f424ac99c6beb4175092ec8a92938d12fd5b95b0a5b3275680e83da0813e
	Aug 19 10:55:17 addons-454931 crio[1027]: time="2024-08-19 10:55:17.969979798Z" level=warning msg="Stopping container 77a4e190d16b65dd21cc587bc7159e309f4abb74bd76ebb5a6ac0a6b39675066 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=fb7ef040-2177-4328-9156-32cd42c270ab name=/runtime.v1.RuntimeService/StopContainer
	Aug 19 10:55:18 addons-454931 conmon[5578]: conmon 77a4e190d16b65dd21cc <ninfo>: container 5590 exited with status 137
	Aug 19 10:55:18 addons-454931 crio[1027]: time="2024-08-19 10:55:18.104767980Z" level=info msg="Stopped container 77a4e190d16b65dd21cc587bc7159e309f4abb74bd76ebb5a6ac0a6b39675066: ingress-nginx/ingress-nginx-controller-bc57996ff-5w8fz/controller" id=fb7ef040-2177-4328-9156-32cd42c270ab name=/runtime.v1.RuntimeService/StopContainer
	Aug 19 10:55:18 addons-454931 crio[1027]: time="2024-08-19 10:55:18.105335350Z" level=info msg="Stopping pod sandbox: b159d32f52a6b60abaa246515c854ac76ee6e6c684ead4d1957647cc0b86f6bc" id=1c79db10-1b39-4a58-a463-38946875d7f1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 10:55:18 addons-454931 crio[1027]: time="2024-08-19 10:55:18.109068116Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-JHLXJK7N7UFUCPIH - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-MTAEARMDBHLSWKCK - [0:0]\n-X KUBE-HP-JHLXJK7N7UFUCPIH\n-X KUBE-HP-MTAEARMDBHLSWKCK\nCOMMIT\n"
	Aug 19 10:55:18 addons-454931 crio[1027]: time="2024-08-19 10:55:18.110576812Z" level=info msg="Closing host port tcp:80"
	Aug 19 10:55:18 addons-454931 crio[1027]: time="2024-08-19 10:55:18.110631922Z" level=info msg="Closing host port tcp:443"
	Aug 19 10:55:18 addons-454931 crio[1027]: time="2024-08-19 10:55:18.112143057Z" level=info msg="Host port tcp:80 does not have an open socket"
	Aug 19 10:55:18 addons-454931 crio[1027]: time="2024-08-19 10:55:18.112168432Z" level=info msg="Host port tcp:443 does not have an open socket"
	Aug 19 10:55:18 addons-454931 crio[1027]: time="2024-08-19 10:55:18.112367220Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-bc57996ff-5w8fz Namespace:ingress-nginx ID:b159d32f52a6b60abaa246515c854ac76ee6e6c684ead4d1957647cc0b86f6bc UID:55a27035-d488-4661-b235-52df238e72e7 NetNS:/var/run/netns/1ab5493e-3b91-44c1-a575-1d2125dbd236 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 19 10:55:18 addons-454931 crio[1027]: time="2024-08-19 10:55:18.112526643Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-bc57996ff-5w8fz from CNI network \"kindnet\" (type=ptp)"
	Aug 19 10:55:18 addons-454931 crio[1027]: time="2024-08-19 10:55:18.159537603Z" level=info msg="Stopped pod sandbox: b159d32f52a6b60abaa246515c854ac76ee6e6c684ead4d1957647cc0b86f6bc" id=1c79db10-1b39-4a58-a463-38946875d7f1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 10:55:18 addons-454931 crio[1027]: time="2024-08-19 10:55:18.391125740Z" level=info msg="Removing container: 77a4e190d16b65dd21cc587bc7159e309f4abb74bd76ebb5a6ac0a6b39675066" id=951cde09-7d7c-4307-91f1-fef6b318c963 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 10:55:18 addons-454931 crio[1027]: time="2024-08-19 10:55:18.406577801Z" level=info msg="Removed container 77a4e190d16b65dd21cc587bc7159e309f4abb74bd76ebb5a6ac0a6b39675066: ingress-nginx/ingress-nginx-controller-bc57996ff-5w8fz/controller" id=951cde09-7d7c-4307-91f1-fef6b318c963 name=/runtime.v1.RuntimeService/RemoveContainer
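In the teardown above, CRI-O removes the ingress controller's hostPort mappings by rewriting the KUBE-HOSTPORTS and per-pod KUBE-HP-* chains in the nat table, then closes host ports 80 and 443. A hedged way to inspect those chains on the node while a hostPort pod is still running (chain names are taken from the log lines above):

	minikube -p addons-454931 ssh -- sudo iptables -t nat -S KUBE-HOSTPORTS
	minikube -p addons-454931 ssh -- 'sudo iptables -t nat -S | grep KUBE-HP-'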
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cc6c18b26eee1       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        5 seconds ago       Running             hello-world-app           0                   18e8f424ac99c       hello-world-app-55bf9c44b4-9zzxq
	fabd3d00ff447       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                              2 minutes ago       Running             nginx                     0                   cbd68ca0cc955       nginx
	03a15738c9960       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   d543f674aeef7       busybox
	59c51a6ccc5b7       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   13a46ef95d37b       local-path-provisioner-86d989889c-hvnxs
	4cc4bc5a71789       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   4 minutes ago       Exited              patch                     0                   6de7785a3d610       ingress-nginx-admission-patch-hz5tk
	7be7c5c1959e6       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago       Running             metrics-server            0                   e67682a0f458b       metrics-server-8988944d9-w697b
	a5bf1bddd60ff       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   5 minutes ago       Exited              create                    0                   eac484383a5a3       ingress-nginx-admission-create-gjp2j
	f6b69457461e9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             5 minutes ago       Running             coredns                   0                   4795ae57d4813       coredns-6f6b679f8f-4lg4p
	d18cf641bcb89       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   9a2fb5fa91757       storage-provisioner
	efa219bf4f069       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             5 minutes ago       Running             coredns                   0                   88ae1673af4e1       coredns-6f6b679f8f-hrnrm
	a291ab855f115       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b                           5 minutes ago       Running             kindnet-cni               0                   d478e18ee0139       kindnet-82zcc
	548017acd8f1a       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                             5 minutes ago       Running             kube-proxy                0                   f41c989262885       kube-proxy-8dmbm
	7d39664256a4d       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                             5 minutes ago       Running             kube-scheduler            0                   ce4617c6e7341       kube-scheduler-addons-454931
	8e27c625be2e5       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                             5 minutes ago       Running             kube-apiserver            0                   7146a60e9c386       kube-apiserver-addons-454931
	cc5123d3ccb34       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                             5 minutes ago       Running             kube-controller-manager   0                   e7a495b2a54ad       kube-controller-manager-addons-454931
	5aa227674dce3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             5 minutes ago       Running             etcd                      0                   7acfce8976602       etcd-addons-454931
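This table is crictl's container listing. To drill into one entry, the short IDs in the first column are enough, since crictl accepts unique ID prefixes; a sketch (the coredns ID is from the table above):

	minikube -p addons-454931 ssh -- sudo crictl ps -a --name coredns
	minikube -p addons-454931 ssh -- sudo crictl inspect f6b69457461e9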
	
	
	==> coredns [efa219bf4f0691a53d5267f2849bea5346e24dd972e9ec60342f16521fe772cb] <==
	[INFO] 10.244.0.7:41958 - 33101 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000077428s
	[INFO] 10.244.0.7:54471 - 52001 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000089237s
	[INFO] 10.244.0.7:54471 - 42796 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0001511s
	[INFO] 10.244.0.7:52201 - 59800 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00006429s
	[INFO] 10.244.0.7:52201 - 20380 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000097268s
	[INFO] 10.244.0.7:54121 - 58433 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.003850892s
	[INFO] 10.244.0.7:54121 - 59725 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.005563286s
	[INFO] 10.244.0.7:52094 - 25866 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003271509s
	[INFO] 10.244.0.7:52094 - 29198 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003863361s
	[INFO] 10.244.0.7:56623 - 22681 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000081281s
	[INFO] 10.244.0.7:56623 - 52634 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000064844s
	[INFO] 10.244.0.7:60412 - 17149 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000442s
	[INFO] 10.244.0.7:60412 - 13025 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000053967s
	[INFO] 10.244.0.7:42848 - 16461 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.003665777s
	[INFO] 10.244.0.7:42848 - 49742 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.003688822s
	[INFO] 10.244.0.7:40232 - 18365 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004089831s
	[INFO] 10.244.0.7:40232 - 50617 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004200287s
	[INFO] 10.244.0.7:57769 - 32646 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.002882573s
	[INFO] 10.244.0.7:57769 - 11403 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003776917s
	[INFO] 10.244.0.22:54533 - 64803 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000189906s
	[INFO] 10.244.0.22:39407 - 14247 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00014852s
	[INFO] 10.244.0.22:33803 - 18565 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.007448023s
	[INFO] 10.244.0.22:36783 - 15237 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.00619233s
	[INFO] 10.244.0.22:52466 - 4793 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006281089s
	[INFO] 10.244.0.22:55602 - 1488 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000815424s
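The NXDOMAIN bursts above are not failures: with the cluster's default ndots:5 resolver settings, a pod looking up registry.kube-system.svc.cluster.local (only four dots) first walks its search path (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, then the GCE suffixes), and only the final absolute query returns NOERROR. A quick way to see that search path from the busybox pod in this run:

	kubectl --context addons-454931 exec busybox -- cat /etc/resolv.conf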
	
	
	==> coredns [f6b69457461e9f416f747747d0f782733c7404dcdfac764ad764e7064665a63b] <==
	[INFO] 10.244.0.7:51052 - 63765 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000126615s
	[INFO] 10.244.0.7:42284 - 24012 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.003903686s
	[INFO] 10.244.0.7:42284 - 13775 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.005355048s
	[INFO] 10.244.0.7:60296 - 37705 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003218804s
	[INFO] 10.244.0.7:60296 - 32589 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004250666s
	[INFO] 10.244.0.7:46083 - 61566 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000110167s
	[INFO] 10.244.0.7:46083 - 28536 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000139858s
	[INFO] 10.244.0.7:39968 - 43585 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004197253s
	[INFO] 10.244.0.7:39968 - 4164 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004436842s
	[INFO] 10.244.0.7:37474 - 54656 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000083632s
	[INFO] 10.244.0.7:37474 - 25229 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000105475s
	[INFO] 10.244.0.7:44404 - 31906 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000157019s
	[INFO] 10.244.0.7:44404 - 25775 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00018275s
	[INFO] 10.244.0.7:46216 - 25519 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000065258s
	[INFO] 10.244.0.7:46216 - 57011 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000106283s
	[INFO] 10.244.0.22:60011 - 27675 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000184004s
	[INFO] 10.244.0.22:58506 - 17004 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00011187s
	[INFO] 10.244.0.22:44021 - 3426 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000122507s
	[INFO] 10.244.0.22:39021 - 15645 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000140122s
	[INFO] 10.244.0.22:46275 - 60929 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.007999212s
	[INFO] 10.244.0.22:41570 - 62774 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004459121s
	[INFO] 10.244.0.22:34100 - 60625 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005328444s
	[INFO] 10.244.0.22:57989 - 60766 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000946131s
	[INFO] 10.244.0.27:49374 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000191128s
	[INFO] 10.244.0.27:52591 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000134612s
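These per-query [INFO] lines appear because the log plugin is enabled in this cluster's Corefile; stock kubeadm deployments do not log every query. A hedged check of whether the CoreDNS config carries the plugin:

	kubectl --context addons-454931 -n kube-system get configmap coredns -o yaml | grep -n 'log'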
	
	
	==> describe nodes <==
	Name:               addons-454931
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-454931
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934
	                    minikube.k8s.io/name=addons-454931
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T10_49_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-454931
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 10:49:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-454931
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 10:55:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 10:53:32 +0000   Mon, 19 Aug 2024 10:49:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 10:53:32 +0000   Mon, 19 Aug 2024 10:49:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 10:53:32 +0000   Mon, 19 Aug 2024 10:49:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 10:53:32 +0000   Mon, 19 Aug 2024 10:49:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-454931
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 960f55d6b6854585920b92aaf22992e8
	  System UUID:                1e7e9fae-fade-4d33-903a-36d9e09706d1
	  Boot ID:                    7f72e4de-82e3-4ac1-af0c-a667ff710ce9
	  Kernel Version:             5.15.0-1066-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m20s
	  default                     hello-world-app-55bf9c44b4-9zzxq           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 coredns-6f6b679f8f-4lg4p                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m49s
	  kube-system                 coredns-6f6b679f8f-hrnrm                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m50s
	  kube-system                 etcd-addons-454931                         100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m55s
	  kube-system                 kindnet-82zcc                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m51s
	  kube-system                 kube-apiserver-addons-454931               250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 kube-controller-manager-addons-454931      200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 kube-proxy-8dmbm                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 kube-scheduler-addons-454931               100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 metrics-server-8988944d9-w697b             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         5m46s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m46s
	  local-path-storage          local-path-provisioner-86d989889c-hvnxs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             490Mi (1%)   390Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 5m45s                kube-proxy       
	  Normal   NodeHasSufficientMemory  6m1s (x8 over 6m1s)  kubelet          Node addons-454931 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m1s (x8 over 6m1s)  kubelet          Node addons-454931 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m1s (x7 over 6m1s)  kubelet          Node addons-454931 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m55s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m55s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  5m55s                kubelet          Node addons-454931 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m55s                kubelet          Node addons-454931 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m55s                kubelet          Node addons-454931 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m51s                node-controller  Node addons-454931 event: Registered Node addons-454931 in Controller
	  Normal   NodeReady                5m32s                kubelet          Node addons-454931 status is now: NodeReady
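The Allocated resources figures above are just column sums from the pod table divided by the node's Allocatable: CPU requests are 100m + 100m (two coredns) + 100m (etcd) + 100m (kindnet) + 250m (apiserver) + 200m (controller-manager) + 100m (scheduler) + 100m (metrics-server) = 1050m, and 1050m of 8000m allocatable is about 13%; memory requests 70 + 70 + 100 + 50 + 200 = 490Mi against 32859320Ki is about 1%. To reproduce just that summary:

	kubectl --context addons-454931 describe node addons-454931 | sed -n '/Allocated resources:/,/Events:/p'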
	
	
	==> dmesg <==
	[  +0.001354] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.001371] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.001459] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001282] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.572178] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.045427] systemd[1]: /lib/systemd/system/cloud-init-local.service:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.006439] systemd[1]: /lib/systemd/system/cloud-init.service:19: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.014067] systemd[1]: /lib/systemd/system/cloud-final.service:9: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.002552] systemd[1]: /lib/systemd/system/cloud-config.service:8: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.013877] systemd[1]: /lib/systemd/system/cloud-init.target:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +6.453746] kauditd_printk_skb: 46 callbacks suppressed
	[Aug19 10:53] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ba eb 3d db 32 39 92 24 91 09 8a a1 08 00
	[  +1.007780] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ba eb 3d db 32 39 92 24 91 09 8a a1 08 00
	[  +2.011814] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ba eb 3d db 32 39 92 24 91 09 8a a1 08 00
	[  +4.063599] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ba eb 3d db 32 39 92 24 91 09 8a a1 08 00
	[  +8.191202] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ba eb 3d db 32 39 92 24 91 09 8a a1 08 00
	[ +16.126423] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ba eb 3d db 32 39 92 24 91 09 8a a1 08 00
	[Aug19 10:54] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ba eb 3d db 32 39 92 24 91 09 8a a1 08 00
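The repeated martian-source entries mean the kernel saw packets claiming source 127.0.0.1 arrive on eth0, which reverse-path filtering treats as impossible; the doubling intervals (+1s, +2s, +4s, +8s, +16s) look like a client retrying with backoff. Logging of these is controlled by a sysctl, which can be checked on the node:

	minikube -p addons-454931 ssh -- sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.all.rp_filter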
	
	
	==> etcd [5aa227674dce361724174026c8a0ea1cf2334d688e7db0f087b365e61b4dc933] <==
	{"level":"info","ts":"2024-08-19T10:49:34.373000Z","caller":"traceutil/trace.go:171","msg":"trace[403933077] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"100.302724ms","start":"2024-08-19T10:49:34.272681Z","end":"2024-08-19T10:49:34.372983Z","steps":["trace[403933077] 'process raft request'  (duration: 100.198334ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T10:49:34.554228Z","caller":"traceutil/trace.go:171","msg":"trace[839491864] transaction","detail":"{read_only:false; response_revision:382; number_of_response:1; }","duration":"188.931632ms","start":"2024-08-19T10:49:34.365273Z","end":"2024-08-19T10:49:34.554205Z","steps":["trace[839491864] 'process raft request'  (duration: 92.863023ms)","trace[839491864] 'compare'  (duration: 95.679414ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T10:49:34.857777Z","caller":"traceutil/trace.go:171","msg":"trace[488263906] transaction","detail":"{read_only:false; response_revision:386; number_of_response:1; }","duration":"183.060142ms","start":"2024-08-19T10:49:34.673348Z","end":"2024-08-19T10:49:34.856408Z","steps":["trace[488263906] 'process raft request'  (duration: 182.828447ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T10:49:34.960620Z","caller":"traceutil/trace.go:171","msg":"trace[1361221828] linearizableReadLoop","detail":"{readStateIndex:401; appliedIndex:396; }","duration":"184.126501ms","start":"2024-08-19T10:49:34.776477Z","end":"2024-08-19T10:49:34.960603Z","steps":["trace[1361221828] 'read index received'  (duration: 79.657019ms)","trace[1361221828] 'applied index is now lower than readState.Index'  (duration: 104.468828ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T10:49:34.960763Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.265901ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T10:49:34.960796Z","caller":"traceutil/trace.go:171","msg":"trace[378239969] range","detail":"{range_begin:/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset; range_end:; response_count:0; response_revision:389; }","duration":"184.312374ms","start":"2024-08-19T10:49:34.776473Z","end":"2024-08-19T10:49:34.960786Z","steps":["trace[378239969] 'agreement among raft nodes before linearized reading'  (duration: 184.207526ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T10:49:34.961044Z","caller":"traceutil/trace.go:171","msg":"trace[760802916] transaction","detail":"{read_only:false; response_revision:387; number_of_response:1; }","duration":"287.173248ms","start":"2024-08-19T10:49:34.673845Z","end":"2024-08-19T10:49:34.961019Z","steps":["trace[760802916] 'process raft request'  (duration: 195.844543ms)","trace[760802916] 'get key's previous created_revision and leaseID' {req_type:put; key:/registry/deployments/kube-system/coredns; req_size:4016; } (duration: 90.366506ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T10:49:34.961221Z","caller":"traceutil/trace.go:171","msg":"trace[1706789155] transaction","detail":"{read_only:false; response_revision:388; number_of_response:1; }","duration":"285.707791ms","start":"2024-08-19T10:49:34.675490Z","end":"2024-08-19T10:49:34.961213Z","steps":["trace[1706789155] 'process raft request'  (duration: 284.917608ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T10:49:34.961320Z","caller":"traceutil/trace.go:171","msg":"trace[1516592545] transaction","detail":"{read_only:false; number_of_response:1; response_revision:388; }","duration":"200.300701ms","start":"2024-08-19T10:49:34.761013Z","end":"2024-08-19T10:49:34.961314Z","steps":["trace[1516592545] 'process raft request'  (duration: 199.471609ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T10:49:34.961407Z","caller":"traceutil/trace.go:171","msg":"trace[780529673] transaction","detail":"{read_only:false; response_revision:389; number_of_response:1; }","duration":"185.142231ms","start":"2024-08-19T10:49:34.776253Z","end":"2024-08-19T10:49:34.961396Z","steps":["trace[780529673] 'process raft request'  (duration: 184.281163ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T10:49:35.754190Z","caller":"traceutil/trace.go:171","msg":"trace[451347578] transaction","detail":"{read_only:false; response_revision:401; number_of_response:1; }","duration":"189.192755ms","start":"2024-08-19T10:49:35.564977Z","end":"2024-08-19T10:49:35.754170Z","steps":["trace[451347578] 'process raft request'  (duration: 97.341338ms)","trace[451347578] 'compare'  (duration: 91.585712ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T10:49:35.754514Z","caller":"traceutil/trace.go:171","msg":"trace[2127898163] linearizableReadLoop","detail":"{readStateIndex:413; appliedIndex:412; }","duration":"184.70333ms","start":"2024-08-19T10:49:35.569797Z","end":"2024-08-19T10:49:35.754500Z","steps":["trace[2127898163] 'read index received'  (duration: 92.531543ms)","trace[2127898163] 'applied index is now lower than readState.Index'  (duration: 92.170961ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T10:49:35.754606Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.791695ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T10:49:35.756684Z","caller":"traceutil/trace.go:171","msg":"trace[1479245646] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:406; }","duration":"186.87491ms","start":"2024-08-19T10:49:35.569792Z","end":"2024-08-19T10:49:35.756667Z","steps":["trace[1479245646] 'agreement among raft nodes before linearized reading'  (duration: 184.751451ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T10:49:35.756750Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.894706ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-addons-454931\" ","response":"range_response_count:1 size:7632"}
	{"level":"info","ts":"2024-08-19T10:49:35.754635Z","caller":"traceutil/trace.go:171","msg":"trace[1699164827] transaction","detail":"{read_only:false; response_revision:402; number_of_response:1; }","duration":"184.604571ms","start":"2024-08-19T10:49:35.570021Z","end":"2024-08-19T10:49:35.754626Z","steps":["trace[1699164827] 'process raft request'  (duration: 184.122714ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T10:49:35.754675Z","caller":"traceutil/trace.go:171","msg":"trace[1702706134] transaction","detail":"{read_only:false; response_revision:404; number_of_response:1; }","duration":"100.310775ms","start":"2024-08-19T10:49:35.654353Z","end":"2024-08-19T10:49:35.754664Z","steps":["trace[1702706134] 'process raft request'  (duration: 99.905567ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T10:49:35.754768Z","caller":"traceutil/trace.go:171","msg":"trace[675679215] transaction","detail":"{read_only:false; response_revision:403; number_of_response:1; }","duration":"100.505413ms","start":"2024-08-19T10:49:35.654254Z","end":"2024-08-19T10:49:35.754759Z","steps":["trace[675679215] 'process raft request'  (duration: 99.976845ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T10:49:35.757920Z","caller":"traceutil/trace.go:171","msg":"trace[40001997] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-addons-454931; range_end:; response_count:1; response_revision:406; }","duration":"188.06874ms","start":"2024-08-19T10:49:35.569835Z","end":"2024-08-19T10:49:35.757904Z","steps":["trace[40001997] 'agreement among raft nodes before linearized reading'  (duration: 186.855991ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T10:49:36.656939Z","caller":"traceutil/trace.go:171","msg":"trace[2109661097] transaction","detail":"{read_only:false; response_revision:450; number_of_response:1; }","duration":"186.960435ms","start":"2024-08-19T10:49:36.469964Z","end":"2024-08-19T10:49:36.656925Z","steps":["trace[2109661097] 'process raft request'  (duration: 186.922523ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T10:49:36.657528Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.860907ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/registry\" ","response":"range_response_count:1 size:3351"}
	{"level":"info","ts":"2024-08-19T10:49:36.657826Z","caller":"traceutil/trace.go:171","msg":"trace[1293447703] range","detail":"{range_begin:/registry/deployments/kube-system/registry; range_end:; response_count:1; response_revision:450; }","duration":"101.164818ms","start":"2024-08-19T10:49:36.556647Z","end":"2024-08-19T10:49:36.657812Z","steps":["trace[1293447703] 'agreement among raft nodes before linearized reading'  (duration: 100.826319ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T10:49:37.272942Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.01887ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/default/kubernetes\" ","response":"range_response_count:1 size:474"}
	{"level":"info","ts":"2024-08-19T10:49:37.273084Z","caller":"traceutil/trace.go:171","msg":"trace[304421460] range","detail":"{range_begin:/registry/endpointslices/default/kubernetes; range_end:; response_count:1; response_revision:507; }","duration":"103.165352ms","start":"2024-08-19T10:49:37.169903Z","end":"2024-08-19T10:49:37.273069Z","steps":["trace[304421460] 'agreement among raft nodes before linearized reading'  (duration: 102.991635ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T10:50:47.808016Z","caller":"traceutil/trace.go:171","msg":"trace[619413792] transaction","detail":"{read_only:false; response_revision:1161; number_of_response:1; }","duration":"110.038435ms","start":"2024-08-19T10:50:47.697956Z","end":"2024-08-19T10:50:47.807995Z","steps":["trace[619413792] 'process raft request'  (duration: 43.068904ms)","trace[619413792] 'compare'  (duration: 66.883694ms)"],"step_count":2}
	
	
	==> kernel <==
	 10:55:23 up 37 min,  0 users,  load average: 0.16, 0.37, 0.23
	Linux addons-454931 5.15.0-1066-gcp #74~20.04.1-Ubuntu SMP Fri Jul 26 09:28:41 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [a291ab855f115c38d50a242f470844271cc15d6b4a6415a2256a82bc4761595a] <==
	E0819 10:54:07.115042       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 10:54:11.754793       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 10:54:11.754841       1 main.go:299] handling current node
	W0819 10:54:18.694176       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 10:54:18.694217       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0819 10:54:21.754872       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 10:54:21.754913       1 main.go:299] handling current node
	I0819 10:54:31.754289       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 10:54:31.754333       1 main.go:299] handling current node
	I0819 10:54:41.755057       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 10:54:41.755102       1 main.go:299] handling current node
	W0819 10:54:46.363310       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 10:54:46.363346       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0819 10:54:51.755084       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 10:54:51.755129       1 main.go:299] handling current node
	W0819 10:54:59.693251       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 10:54:59.693290       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0819 10:55:00.956492       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 10:55:00.956534       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 10:55:01.754839       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 10:55:01.754884       1 main.go:299] handling current node
	I0819 10:55:11.754943       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 10:55:11.754982       1 main.go:299] handling current node
	I0819 10:55:21.754846       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 10:55:21.754883       1 main.go:299] handling current node
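kindnet keeps handling the node (which it is permitted to watch) while its list/watch calls for pods, namespaces, and networkpolicies are refused: the kube-system:kindnet service account simply lacks those cluster-scope RBAC verbs. A direct way to confirm which calls would be allowed, impersonating that service account:

	kubectl --context addons-454931 auth can-i list pods --as=system:serviceaccount:kube-system:kindnet
	kubectl --context addons-454931 auth can-i list networkpolicies.networking.k8s.io --as=system:serviceaccount:kube-system:kindnet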
	
	
	==> kube-apiserver [8e27c625be2e5c47c4c554fe2aba32321eba34cf34ee581ac879194dcee62b58] <==
	I0819 10:51:52.616521       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0819 10:52:12.561045       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:48696: use of closed network connection
	E0819 10:52:12.778823       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:48716: use of closed network connection
	I0819 10:52:37.512890       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0819 10:52:38.047013       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E0819 10:52:47.189847       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.29:35624: read: connection reset by peer
	I0819 10:52:48.150453       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0819 10:52:49.168051       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0819 10:52:49.873716       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0819 10:52:50.036906       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.247.106"}
	I0819 10:52:54.800483       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.73.172"}
	I0819 10:53:10.430472       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 10:53:10.430525       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 10:53:10.443973       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 10:53:10.444117       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 10:53:10.445463       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 10:53:10.445500       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 10:53:10.454880       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 10:53:10.455020       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 10:53:10.465966       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 10:53:10.466081       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0819 10:53:11.445608       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0819 10:53:11.466925       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0819 10:53:11.568764       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0819 10:55:12.864345       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.87.86"}
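The GroupVersion add/remove lines track API surface changes at runtime: evaluators and handlers appear when a CRD-backed group (gadget.kinvolk.io, snapshot.storage.k8s.io) is installed, and "Terminating all watchers" follows when the corresponding addon is torn down, presumably by one of the parallel tests. To see what a given group currently serves:

	kubectl --context addons-454931 api-resources --api-group=snapshot.storage.k8s.io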
	
	
	==> kube-controller-manager [cc5123d3ccb34df5aeeed4f851f5aee34f31fd171451f9c676e54152d87b288f] <==
	W0819 10:53:51.202974       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 10:53:51.203020       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 10:54:06.883558       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 10:54:06.883593       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 10:54:18.769553       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 10:54:18.769601       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 10:54:20.991976       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 10:54:20.992018       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 10:54:38.568060       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 10:54:38.568105       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 10:55:00.923726       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 10:55:00.923778       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 10:55:08.644155       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 10:55:08.644204       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0819 10:55:12.640233       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="13.255829ms"
	I0819 10:55:12.645982       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="5.689636ms"
	I0819 10:55:12.646085       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="65.385µs"
	I0819 10:55:12.648544       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="49.228µs"
	W0819 10:55:14.790260       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 10:55:14.790303       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0819 10:55:14.923093       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0819 10:55:14.924943       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="10.051µs"
	I0819 10:55:14.956288       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0819 10:55:17.405502       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="6.539774ms"
	I0819 10:55:17.405599       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="50.396µs"
	
	
	==> kube-proxy [548017acd8f1a56c38fd283ae52b35444913a48cb008849ea7beedf32999f2c5] <==
	I0819 10:49:36.266759       1 server_linux.go:66] "Using iptables proxy"
	I0819 10:49:37.175542       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0819 10:49:37.175719       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 10:49:37.558077       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0819 10:49:37.558203       1 server_linux.go:169] "Using iptables Proxier"
	I0819 10:49:37.568736       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 10:49:37.572784       1 server.go:483] "Version info" version="v1.31.0"
	I0819 10:49:37.573063       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 10:49:37.759310       1 config.go:197] "Starting service config controller"
	I0819 10:49:37.763009       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 10:49:37.762332       1 config.go:326] "Starting node config controller"
	I0819 10:49:37.763162       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 10:49:37.762364       1 config.go:104] "Starting endpoint slice config controller"
	I0819 10:49:37.763216       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 10:49:37.863577       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 10:49:37.863616       1 shared_informer.go:320] Caches are synced for service config
	I0819 10:49:37.863748       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7d39664256a4d3ba4557123ef31052dad647643e97a23a78ed323a868076a590] <==
	W0819 10:49:25.583348       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 10:49:25.583569       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 10:49:25.583359       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 10:49:25.583603       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 10:49:25.583420       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 10:49:25.583632       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 10:49:26.484862       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 10:49:26.484899       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 10:49:26.505185       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 10:49:26.505234       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 10:49:26.584095       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 10:49:26.584137       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 10:49:26.621657       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 10:49:26.621705       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 10:49:26.628971       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 10:49:26.629012       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 10:49:26.697928       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 10:49:26.697971       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 10:49:26.712491       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 10:49:26.712528       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 10:49:26.714528       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 10:49:26.714567       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 10:49:26.732943       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 10:49:26.732986       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0819 10:49:29.882301       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 10:55:12 addons-454931 kubelet[1625]: I0819 10:55:12.640806    1625 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e18ef25-29a2-4236-8d0c-71437898a75b" containerName="headlamp"
	Aug 19 10:55:12 addons-454931 kubelet[1625]: I0819 10:55:12.811647    1625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfmm7\" (UniqueName: \"kubernetes.io/projected/36beacd3-0315-4191-ae43-53e7aa9b5e1d-kube-api-access-tfmm7\") pod \"hello-world-app-55bf9c44b4-9zzxq\" (UID: \"36beacd3-0315-4191-ae43-53e7aa9b5e1d\") " pod="default/hello-world-app-55bf9c44b4-9zzxq"
	Aug 19 10:55:13 addons-454931 kubelet[1625]: I0819 10:55:13.817046    1625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sr9gp\" (UniqueName: \"kubernetes.io/projected/8c0f4e82-c7eb-4302-bbfc-b9a95ab55947-kube-api-access-sr9gp\") pod \"8c0f4e82-c7eb-4302-bbfc-b9a95ab55947\" (UID: \"8c0f4e82-c7eb-4302-bbfc-b9a95ab55947\") "
	Aug 19 10:55:13 addons-454931 kubelet[1625]: I0819 10:55:13.818884    1625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c0f4e82-c7eb-4302-bbfc-b9a95ab55947-kube-api-access-sr9gp" (OuterVolumeSpecName: "kube-api-access-sr9gp") pod "8c0f4e82-c7eb-4302-bbfc-b9a95ab55947" (UID: "8c0f4e82-c7eb-4302-bbfc-b9a95ab55947"). InnerVolumeSpecName "kube-api-access-sr9gp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 19 10:55:13 addons-454931 kubelet[1625]: I0819 10:55:13.917530    1625 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-sr9gp\" (UniqueName: \"kubernetes.io/projected/8c0f4e82-c7eb-4302-bbfc-b9a95ab55947-kube-api-access-sr9gp\") on node \"addons-454931\" DevicePath \"\""
	Aug 19 10:55:14 addons-454931 kubelet[1625]: I0819 10:55:14.376898    1625 scope.go:117] "RemoveContainer" containerID="f1dcabef3143398a4746747a58c7c1bb0725eb9075a731941c6eee362a2b9904"
	Aug 19 10:55:14 addons-454931 kubelet[1625]: I0819 10:55:14.394798    1625 scope.go:117] "RemoveContainer" containerID="f1dcabef3143398a4746747a58c7c1bb0725eb9075a731941c6eee362a2b9904"
	Aug 19 10:55:14 addons-454931 kubelet[1625]: E0819 10:55:14.395315    1625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1dcabef3143398a4746747a58c7c1bb0725eb9075a731941c6eee362a2b9904\": container with ID starting with f1dcabef3143398a4746747a58c7c1bb0725eb9075a731941c6eee362a2b9904 not found: ID does not exist" containerID="f1dcabef3143398a4746747a58c7c1bb0725eb9075a731941c6eee362a2b9904"
	Aug 19 10:55:14 addons-454931 kubelet[1625]: I0819 10:55:14.395356    1625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1dcabef3143398a4746747a58c7c1bb0725eb9075a731941c6eee362a2b9904"} err="failed to get container status \"f1dcabef3143398a4746747a58c7c1bb0725eb9075a731941c6eee362a2b9904\": rpc error: code = NotFound desc = could not find container \"f1dcabef3143398a4746747a58c7c1bb0725eb9075a731941c6eee362a2b9904\": container with ID starting with f1dcabef3143398a4746747a58c7c1bb0725eb9075a731941c6eee362a2b9904 not found: ID does not exist"
	Aug 19 10:55:16 addons-454931 kubelet[1625]: I0819 10:55:16.165353    1625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08d6052b-2678-40b7-a13a-4bc3c6638038" path="/var/lib/kubelet/pods/08d6052b-2678-40b7-a13a-4bc3c6638038/volumes"
	Aug 19 10:55:16 addons-454931 kubelet[1625]: I0819 10:55:16.165799    1625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c0f4e82-c7eb-4302-bbfc-b9a95ab55947" path="/var/lib/kubelet/pods/8c0f4e82-c7eb-4302-bbfc-b9a95ab55947/volumes"
	Aug 19 10:55:16 addons-454931 kubelet[1625]: I0819 10:55:16.166157    1625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ef0e20e-8ce6-4ba9-9c68-6fb0bf266d65" path="/var/lib/kubelet/pods/9ef0e20e-8ce6-4ba9-9c68-6fb0bf266d65/volumes"
	Aug 19 10:55:18 addons-454931 kubelet[1625]: I0819 10:55:18.272960    1625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/55a27035-d488-4661-b235-52df238e72e7-webhook-cert\") pod \"55a27035-d488-4661-b235-52df238e72e7\" (UID: \"55a27035-d488-4661-b235-52df238e72e7\") "
	Aug 19 10:55:18 addons-454931 kubelet[1625]: I0819 10:55:18.273034    1625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grlvv\" (UniqueName: \"kubernetes.io/projected/55a27035-d488-4661-b235-52df238e72e7-kube-api-access-grlvv\") pod \"55a27035-d488-4661-b235-52df238e72e7\" (UID: \"55a27035-d488-4661-b235-52df238e72e7\") "
	Aug 19 10:55:18 addons-454931 kubelet[1625]: I0819 10:55:18.274958    1625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55a27035-d488-4661-b235-52df238e72e7-kube-api-access-grlvv" (OuterVolumeSpecName: "kube-api-access-grlvv") pod "55a27035-d488-4661-b235-52df238e72e7" (UID: "55a27035-d488-4661-b235-52df238e72e7"). InnerVolumeSpecName "kube-api-access-grlvv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 19 10:55:18 addons-454931 kubelet[1625]: I0819 10:55:18.274987    1625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55a27035-d488-4661-b235-52df238e72e7-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "55a27035-d488-4661-b235-52df238e72e7" (UID: "55a27035-d488-4661-b235-52df238e72e7"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 19 10:55:18 addons-454931 kubelet[1625]: I0819 10:55:18.373774    1625 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/55a27035-d488-4661-b235-52df238e72e7-webhook-cert\") on node \"addons-454931\" DevicePath \"\""
	Aug 19 10:55:18 addons-454931 kubelet[1625]: I0819 10:55:18.373823    1625 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-grlvv\" (UniqueName: \"kubernetes.io/projected/55a27035-d488-4661-b235-52df238e72e7-kube-api-access-grlvv\") on node \"addons-454931\" DevicePath \"\""
	Aug 19 10:55:18 addons-454931 kubelet[1625]: I0819 10:55:18.390052    1625 scope.go:117] "RemoveContainer" containerID="77a4e190d16b65dd21cc587bc7159e309f4abb74bd76ebb5a6ac0a6b39675066"
	Aug 19 10:55:18 addons-454931 kubelet[1625]: E0819 10:55:18.401555    1625 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724064918401306477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616563,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 10:55:18 addons-454931 kubelet[1625]: E0819 10:55:18.401597    1625 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724064918401306477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616563,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 10:55:18 addons-454931 kubelet[1625]: I0819 10:55:18.406848    1625 scope.go:117] "RemoveContainer" containerID="77a4e190d16b65dd21cc587bc7159e309f4abb74bd76ebb5a6ac0a6b39675066"
	Aug 19 10:55:18 addons-454931 kubelet[1625]: E0819 10:55:18.407308    1625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77a4e190d16b65dd21cc587bc7159e309f4abb74bd76ebb5a6ac0a6b39675066\": container with ID starting with 77a4e190d16b65dd21cc587bc7159e309f4abb74bd76ebb5a6ac0a6b39675066 not found: ID does not exist" containerID="77a4e190d16b65dd21cc587bc7159e309f4abb74bd76ebb5a6ac0a6b39675066"
	Aug 19 10:55:18 addons-454931 kubelet[1625]: I0819 10:55:18.407347    1625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77a4e190d16b65dd21cc587bc7159e309f4abb74bd76ebb5a6ac0a6b39675066"} err="failed to get container status \"77a4e190d16b65dd21cc587bc7159e309f4abb74bd76ebb5a6ac0a6b39675066\": rpc error: code = NotFound desc = could not find container \"77a4e190d16b65dd21cc587bc7159e309f4abb74bd76ebb5a6ac0a6b39675066\": container with ID starting with 77a4e190d16b65dd21cc587bc7159e309f4abb74bd76ebb5a6ac0a6b39675066 not found: ID does not exist"
	Aug 19 10:55:20 addons-454931 kubelet[1625]: I0819 10:55:20.166198    1625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55a27035-d488-4661-b235-52df238e72e7" path="/var/lib/kubelet/pods/55a27035-d488-4661-b235-52df238e72e7/volumes"
	
	
	==> storage-provisioner [d18cf641bcb894f80055948d4b524f525fef195a0f0db22c91cca43266b781de] <==
	I0819 10:49:52.961924       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 10:49:52.971714       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 10:49:52.971776       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 10:49:52.983906       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 10:49:52.984062       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-454931_c463ac3e-4f1b-4dd5-8445-2155b982069f!
	I0819 10:49:52.984083       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"062087cb-c6cc-4539-9bb4-d3dfe225f675", APIVersion:"v1", ResourceVersion:"933", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-454931_c463ac3e-4f1b-4dd5-8445-2155b982069f became leader
	I0819 10:49:53.085004       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-454931_c463ac3e-4f1b-4dd5-8445-2155b982069f!
	
-- /stdout --
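
The kube-controller-manager log above ends in a steady loop of "failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" reflector errors. That pattern usually means an aggregated APIService has no healthy backend; given the metrics-server failure recorded in the next test, v1beta1.metrics.k8s.io is the likely culprit. A hedged follow-up check, not part of this run, reusing the kubectl context shown throughout the report:

	kubectl --context addons-454931 get apiservices v1beta1.metrics.k8s.io -o wide

An Available=False condition there would tie the controller-manager noise to the metrics pipeline rather than to the ingress failure itself.
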
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-454931 -n addons-454931
helpers_test.go:261: (dbg) Run:  kubectl --context addons-454931 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (154.47s)
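
The failing step was the in-node curl against the ingress (ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'", visible in the Audit table below), which timed out; exit status 28 is curl's operation-timeout code. A manual reproduction sketch, assuming the addons-454931 profile is still up; these commands mirror the test's own calls, with an explicit timeout and verbose output added:

	out/minikube-linux-amd64 -p addons-454931 ssh "curl -sv -m 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	kubectl --context addons-454931 -n ingress-nginx get pods,svc -o wide

If the controller pod is Running but the curl still times out, the suspects narrow to the controller's host port binding and the node's iptables rules.
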

x
+
TestAddons/parallel/MetricsServer (349.02s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.053991ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-w697b" [7c3b07c1-62d8-4b80-b68f-5f7a56a385a4] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003474606s
addons_test.go:417: (dbg) Run:  kubectl --context addons-454931 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-454931 top pods -n kube-system: exit status 1 (67.876151ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-4lg4p, age: 3m9.399517399s
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-454931 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-454931 top pods -n kube-system: exit status 1 (66.962922ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-4lg4p, age: 3m13.323935579s
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-454931 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-454931 top pods -n kube-system: exit status 1 (69.810687ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-4lg4p, age: 3m18.921745391s
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-454931 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-454931 top pods -n kube-system: exit status 1 (82.916652ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-4lg4p, age: 3m24.376078496s
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-454931 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-454931 top pods -n kube-system: exit status 1 (62.764845ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-4lg4p, age: 3m34.786910527s
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-454931 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-454931 top pods -n kube-system: exit status 1 (61.555602ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-4lg4p, age: 3m56.911410966s
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-454931 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-454931 top pods -n kube-system: exit status 1 (63.028033ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-4lg4p, age: 4m9.93161108s
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-454931 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-454931 top pods -n kube-system: exit status 1 (61.691927ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-4lg4p, age: 4m52.82471967s
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-454931 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-454931 top pods -n kube-system: exit status 1 (67.765693ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-4lg4p, age: 5m35.821287421s
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-454931 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-454931 top pods -n kube-system: exit status 1 (64.893337ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-4lg4p, age: 6m17.369294615s
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-454931 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-454931 top pods -n kube-system: exit status 1 (61.566942ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-4lg4p, age: 7m26.649461942s
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-454931 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-454931 top pods -n kube-system: exit status 1 (61.955136ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-4lg4p, age: 8m49.868687769s
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-454931 addons disable metrics-server --alsologtostderr -v=1
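
Every kubectl top attempt above returned "Metrics not available" for the same coredns pod across roughly six minutes, so the metrics pipeline never became usable even though the metrics-server pod itself was Running. Two hedged probes that separate a broken API aggregation layer from a scrape problem; neither is executed by the test, and both reuse the context and the k8s-app=metrics-server label seen above:

	kubectl --context addons-454931 get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
	kubectl --context addons-454931 -n kube-system logs -l k8s-app=metrics-server --tail=50

An error from the first command implicates the v1beta1.metrics.k8s.io APIService; scrape errors in the second implicate kubelet connectivity or certificates.
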
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-454931
helpers_test.go:235: (dbg) docker inspect addons-454931:

-- stdout --
	[
	    {
	        "Id": "9ced8e49789dee9e05d6cefd0d92f50caa53f7e366483340a8eae6f7e0f42f75",
	        "Created": "2024-08-19T10:49:12.63298428Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 18590,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-19T10:49:12.765700387Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:197224e1b90979b98de246567852a03b60e3aa31dcd0de02a456282118daeb84",
	        "ResolvConfPath": "/var/lib/docker/containers/9ced8e49789dee9e05d6cefd0d92f50caa53f7e366483340a8eae6f7e0f42f75/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9ced8e49789dee9e05d6cefd0d92f50caa53f7e366483340a8eae6f7e0f42f75/hostname",
	        "HostsPath": "/var/lib/docker/containers/9ced8e49789dee9e05d6cefd0d92f50caa53f7e366483340a8eae6f7e0f42f75/hosts",
	        "LogPath": "/var/lib/docker/containers/9ced8e49789dee9e05d6cefd0d92f50caa53f7e366483340a8eae6f7e0f42f75/9ced8e49789dee9e05d6cefd0d92f50caa53f7e366483340a8eae6f7e0f42f75-json.log",
	        "Name": "/addons-454931",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-454931:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-454931",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/65ea459b84f53c92c7844e22b7e3fb8c0b9c1f93de58dddaa32fea9e56e7114c-init/diff:/var/lib/docker/overlay2/fa7200b92f30b05c6ff80b9438668c67d163f11b4c83e2bafd3c170c7f60ea40/diff",
	                "MergedDir": "/var/lib/docker/overlay2/65ea459b84f53c92c7844e22b7e3fb8c0b9c1f93de58dddaa32fea9e56e7114c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/65ea459b84f53c92c7844e22b7e3fb8c0b9c1f93de58dddaa32fea9e56e7114c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/65ea459b84f53c92c7844e22b7e3fb8c0b9c1f93de58dddaa32fea9e56e7114c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-454931",
	                "Source": "/var/lib/docker/volumes/addons-454931/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-454931",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-454931",
	                "name.minikube.sigs.k8s.io": "addons-454931",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1600eb3e74b17b8c11dddf19fc52757a4a16a1141749e25aea91d3fae69cb7be",
	            "SandboxKey": "/var/run/docker/netns/1600eb3e74b1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-454931": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "f80651d11008c8dc4bf10db3eedb33a79b04c57ebb24d7f95b0f6e3807438d87",
	                    "EndpointID": "f07c36fb1b203e6347481cac6cb7b8d0f62787aed26dd282784cb93c0c11c71a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-454931",
	                        "9ced8e49789d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
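
All host-side port mappings for the node container sit under NetworkSettings.Ports in the inspect output above. For scripting against such a report, a small sketch (jq is assumed to be available; the value matches the JSON above, 32771 for the API server's 8443/tcp):

	docker inspect addons-454931 | jq -r '.[0].NetworkSettings.Ports["8443/tcp"][0].HostPort'
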
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-454931 -n addons-454931
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-454931 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-454931 logs -n 25: (1.161238772s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-docker-492817                                                                   | download-docker-492817 | jenkins | v1.33.1 | 19 Aug 24 10:48 UTC | 19 Aug 24 10:48 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-843469   | jenkins | v1.33.1 | 19 Aug 24 10:48 UTC |                     |
	|         | binary-mirror-843469                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33413                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-843469                                                                     | binary-mirror-843469   | jenkins | v1.33.1 | 19 Aug 24 10:48 UTC | 19 Aug 24 10:48 UTC |
	| addons  | disable dashboard -p                                                                        | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:48 UTC |                     |
	|         | addons-454931                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:48 UTC |                     |
	|         | addons-454931                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-454931 --wait=true                                                                | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:48 UTC | 19 Aug 24 10:52 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | addons-454931 addons disable                                                                | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:52 UTC | 19 Aug 24 10:52 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-454931 addons disable                                                                | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:52 UTC | 19 Aug 24 10:52 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ssh     | addons-454931 ssh cat                                                                       | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:52 UTC | 19 Aug 24 10:52 UTC |
	|         | /opt/local-path-provisioner/pvc-6f8c5a14-e9d6-473e-8f6f-d18080db96da_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-454931 addons disable                                                                | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:52 UTC | 19 Aug 24 10:52 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:52 UTC | 19 Aug 24 10:52 UTC |
	|         | addons-454931                                                                               |                        |         |         |                     |                     |
	| ip      | addons-454931 ip                                                                            | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:52 UTC | 19 Aug 24 10:52 UTC |
	| addons  | addons-454931 addons disable                                                                | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:52 UTC | 19 Aug 24 10:52 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:52 UTC | 19 Aug 24 10:52 UTC |
	|         | -p addons-454931                                                                            |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:52 UTC | 19 Aug 24 10:52 UTC |
	|         | addons-454931                                                                               |                        |         |         |                     |                     |
	| addons  | addons-454931 addons disable                                                                | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:52 UTC | 19 Aug 24 10:52 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:52 UTC | 19 Aug 24 10:52 UTC |
	|         | -p addons-454931                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-454931 ssh curl -s                                                                   | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:53 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-454931 addons                                                                        | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:53 UTC | 19 Aug 24 10:53 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-454931 addons disable                                                                | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:53 UTC | 19 Aug 24 10:53 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-454931 addons                                                                        | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:53 UTC | 19 Aug 24 10:53 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-454931 ip                                                                            | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:55 UTC | 19 Aug 24 10:55 UTC |
	| addons  | addons-454931 addons disable                                                                | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:55 UTC | 19 Aug 24 10:55 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-454931 addons disable                                                                | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:55 UTC | 19 Aug 24 10:55 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-454931 addons                                                                        | addons-454931          | jenkins | v1.33.1 | 19 Aug 24 10:58 UTC | 19 Aug 24 10:58 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 10:48:50
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 10:48:50.161206   17853 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:48:50.161479   17853 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:48:50.161488   17853 out.go:358] Setting ErrFile to fd 2...
	I0819 10:48:50.161493   17853 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:48:50.161716   17853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-9624/.minikube/bin
	I0819 10:48:50.162371   17853 out.go:352] Setting JSON to false
	I0819 10:48:50.163154   17853 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":1870,"bootTime":1724062660,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 10:48:50.163212   17853 start.go:139] virtualization: kvm guest
	I0819 10:48:50.165454   17853 out.go:177] * [addons-454931] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 10:48:50.166710   17853 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 10:48:50.166747   17853 notify.go:220] Checking for updates...
	I0819 10:48:50.169301   17853 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 10:48:50.170562   17853 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19476-9624/kubeconfig
	I0819 10:48:50.171735   17853 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-9624/.minikube
	I0819 10:48:50.172965   17853 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 10:48:50.174057   17853 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 10:48:50.175322   17853 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 10:48:50.196925   17853 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 10:48:50.197064   17853 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 10:48:50.246716   17853 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-19 10:48:50.237681928 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 10:48:50.246819   17853 docker.go:307] overlay module found
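
	The docker system info probe above is how minikube discovers the storage and cgroup drivers before choosing defaults. A minimal sketch of the same check done by hand (only the field names are taken from the output above; the compact template is illustrative):

	    # Ask the daemon for just the fields that matter for driver selection.
	    docker system info --format 'driver={{.Driver}} cgroup={{.CgroupDriver}} ncpu={{.NCPU}} mem={{.MemTotal}}'
	    # On this agent that would print: driver=overlay2 cgroup=cgroupfs ncpu=8 mem=33647943680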
	I0819 10:48:50.248604   17853 out.go:177] * Using the docker driver based on user configuration
	I0819 10:48:50.249841   17853 start.go:297] selected driver: docker
	I0819 10:48:50.249863   17853 start.go:901] validating driver "docker" against <nil>
	I0819 10:48:50.249874   17853 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 10:48:50.250607   17853 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 10:48:50.297381   17853 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-19 10:48:50.288584744 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 10:48:50.297532   17853 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 10:48:50.297776   17853 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 10:48:50.299259   17853 out.go:177] * Using Docker driver with root privileges
	I0819 10:48:50.300409   17853 cni.go:84] Creating CNI manager for ""
	I0819 10:48:50.300426   17853 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 10:48:50.300440   17853 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 10:48:50.300512   17853 start.go:340] cluster config:
	{Name:addons-454931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-454931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:48:50.301792   17853 out.go:177] * Starting "addons-454931" primary control-plane node in "addons-454931" cluster
	I0819 10:48:50.303131   17853 cache.go:121] Beginning downloading kic base image for docker with crio
	I0819 10:48:50.304324   17853 out.go:177] * Pulling base image v0.0.44-1723740748-19452 ...
	I0819 10:48:50.305663   17853 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 10:48:50.305699   17853 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19476-9624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 10:48:50.305710   17853 cache.go:56] Caching tarball of preloaded images
	I0819 10:48:50.305749   17853 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 10:48:50.305794   17853 preload.go:172] Found /home/jenkins/minikube-integration/19476-9624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 10:48:50.305806   17853 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 10:48:50.306098   17853 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/config.json ...
	I0819 10:48:50.306123   17853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/config.json: {Name:mk3c980c39a9d2b1e735137a0236c438a7a88525 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:48:50.321402   17853 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 10:48:50.321533   17853 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 10:48:50.321550   17853 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0819 10:48:50.321556   17853 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0819 10:48:50.321570   17853 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0819 10:48:50.321581   17853 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from local cache
	I0819 10:49:02.642652   17853 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from cached tarball
	I0819 10:49:02.642684   17853 cache.go:194] Successfully downloaded all kic artifacts
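
	The cache dance above (check the local daemon, then the cache directory, then load the tarball) is an ordinary docker save/load round trip. A rough hand-rolled equivalent, with the digest suffix dropped for brevity and the cache path invented for this sketch:

	    IMG='gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452'
	    # First run: fetch once, then keep a tarball so later runs skip the network.
	    docker pull "$IMG" && docker save -o ~/.cache/kicbase.tar "$IMG"
	    # Later runs: restore the image straight from the cached tarball.
	    docker load -i ~/.cache/kicbase.tar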
	I0819 10:49:02.642726   17853 start.go:360] acquireMachinesLock for addons-454931: {Name:mkabded988b43486bb8e374098ad1d731f0bf562 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:49:02.642843   17853 start.go:364] duration metric: took 99.225µs to acquireMachinesLock for "addons-454931"
	I0819 10:49:02.642867   17853 start.go:93] Provisioning new machine with config: &{Name:addons-454931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-454931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 10:49:02.642947   17853 start.go:125] createHost starting for "" (driver="docker")
	I0819 10:49:02.648503   17853 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0819 10:49:02.648740   17853 start.go:159] libmachine.API.Create for "addons-454931" (driver="docker")
	I0819 10:49:02.648768   17853 client.go:168] LocalClient.Create starting
	I0819 10:49:02.648864   17853 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19476-9624/.minikube/certs/ca.pem
	I0819 10:49:02.705200   17853 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19476-9624/.minikube/certs/cert.pem
	I0819 10:49:02.997153   17853 cli_runner.go:164] Run: docker network inspect addons-454931 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0819 10:49:03.013106   17853 cli_runner.go:211] docker network inspect addons-454931 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0819 10:49:03.013193   17853 network_create.go:284] running [docker network inspect addons-454931] to gather additional debugging logs...
	I0819 10:49:03.013216   17853 cli_runner.go:164] Run: docker network inspect addons-454931
	W0819 10:49:03.029224   17853 cli_runner.go:211] docker network inspect addons-454931 returned with exit code 1
	I0819 10:49:03.029253   17853 network_create.go:287] error running [docker network inspect addons-454931]: docker network inspect addons-454931: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-454931 not found
	I0819 10:49:03.029264   17853 network_create.go:289] output of [docker network inspect addons-454931]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-454931 not found
	
	** /stderr **
	I0819 10:49:03.029378   17853 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 10:49:03.045263   17853 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a572c0}
	I0819 10:49:03.045308   17853 network_create.go:124] attempt to create docker network addons-454931 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0819 10:49:03.045356   17853 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-454931 addons-454931
	I0819 10:49:03.107467   17853 network_create.go:108] docker network addons-454931 192.168.49.0/24 created
	I0819 10:49:03.107498   17853 kic.go:121] calculated static IP "192.168.49.2" for the "addons-454931" container
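
	To confirm the bridge came up with the subnet and gateway network_create.go asked for, the inspect template minikube uses above reduces to a one-liner:

	    docker network inspect addons-454931 \
	      --format 'subnet={{range .IPAM.Config}}{{.Subnet}}{{end}} gateway={{range .IPAM.Config}}{{.Gateway}}{{end}}'
	    # expected here: subnet=192.168.49.0/24 gateway=192.168.49.1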
	I0819 10:49:03.107561   17853 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0819 10:49:03.122899   17853 cli_runner.go:164] Run: docker volume create addons-454931 --label name.minikube.sigs.k8s.io=addons-454931 --label created_by.minikube.sigs.k8s.io=true
	I0819 10:49:03.140217   17853 oci.go:103] Successfully created a docker volume addons-454931
	I0819 10:49:03.140294   17853 cli_runner.go:164] Run: docker run --rm --name addons-454931-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-454931 --entrypoint /usr/bin/test -v addons-454931:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib
	I0819 10:49:08.057033   17853 cli_runner.go:217] Completed: docker run --rm --name addons-454931-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-454931 --entrypoint /usr/bin/test -v addons-454931:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib: (4.916691947s)
	I0819 10:49:08.057063   17853 oci.go:107] Successfully prepared a docker volume addons-454931
	I0819 10:49:08.057083   17853 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 10:49:08.057117   17853 kic.go:194] Starting extracting preloaded images to volume ...
	I0819 10:49:08.057189   17853 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19476-9624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-454931:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir
	I0819 10:49:12.570941   17853 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19476-9624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-454931:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir: (4.513691138s)
	I0819 10:49:12.570978   17853 kic.go:203] duration metric: took 4.513868843s to extract preloaded images to volume ...
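
	The preload step just above is a reusable pattern: mount a named volume plus a read-only tarball into a throwaway container and untar into the volume, so the data outlives the container. A minimal sketch of the pattern (IMAGE and the host path are placeholders; the image must ship tar and lz4, as the kicbase image does):

	    docker volume create demo-data
	    docker run --rm \
	      -v /path/to/preloaded.tar.lz4:/preloaded.tar:ro \
	      -v demo-data:/extractDir \
	      --entrypoint /usr/bin/tar IMAGE -I lz4 -xf /preloaded.tar -C /extractDir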
	W0819 10:49:12.571107   17853 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0819 10:49:12.571193   17853 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0819 10:49:12.618523   17853 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-454931 --name addons-454931 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-454931 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-454931 --network addons-454931 --ip 192.168.49.2 --volume addons-454931:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d
	I0819 10:49:12.928131   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Running}}
	I0819 10:49:12.945615   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:12.962954   17853 cli_runner.go:164] Run: docker exec addons-454931 stat /var/lib/dpkg/alternatives/iptables
	I0819 10:49:13.004581   17853 oci.go:144] the created container "addons-454931" has a running status.
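
	The two inspect probes above poll the container's state machine; the same check done by hand:

	    docker container inspect addons-454931 --format '{{.State.Status}} running={{.State.Running}}'
	    # a healthy node container reports: running running=true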
	I0819 10:49:13.004618   17853 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa...
	I0819 10:49:13.066647   17853 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0819 10:49:13.086843   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:13.103329   17853 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0819 10:49:13.103351   17853 kic_runner.go:114] Args: [docker exec --privileged addons-454931 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0819 10:49:13.144531   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:13.164563   17853 machine.go:93] provisionDockerMachine start ...
	I0819 10:49:13.164642   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:13.186723   17853 main.go:141] libmachine: Using SSH client type: native
	I0819 10:49:13.186946   17853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0819 10:49:13.186960   17853 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 10:49:13.187603   17853 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48226->127.0.0.1:32768: read: connection reset by peer
	I0819 10:49:16.305065   17853 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-454931
	
	I0819 10:49:16.305090   17853 ubuntu.go:169] provisioning hostname "addons-454931"
	I0819 10:49:16.305149   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:16.322424   17853 main.go:141] libmachine: Using SSH client type: native
	I0819 10:49:16.322598   17853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0819 10:49:16.322612   17853 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-454931 && echo "addons-454931" | sudo tee /etc/hostname
	I0819 10:49:16.448802   17853 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-454931
	
	I0819 10:49:16.448864   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:16.467862   17853 main.go:141] libmachine: Using SSH client type: native
	I0819 10:49:16.468028   17853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0819 10:49:16.468044   17853 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-454931' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-454931/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-454931' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:49:16.589625   17853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:49:16.589677   17853 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19476-9624/.minikube CaCertPath:/home/jenkins/minikube-integration/19476-9624/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19476-9624/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19476-9624/.minikube}
	I0819 10:49:16.589699   17853 ubuntu.go:177] setting up certificates
	I0819 10:49:16.589709   17853 provision.go:84] configureAuth start
	I0819 10:49:16.589760   17853 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-454931
	I0819 10:49:16.608983   17853 provision.go:143] copyHostCerts
	I0819 10:49:16.609066   17853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-9624/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19476-9624/.minikube/ca.pem (1082 bytes)
	I0819 10:49:16.609177   17853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-9624/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19476-9624/.minikube/cert.pem (1123 bytes)
	I0819 10:49:16.609237   17853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-9624/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19476-9624/.minikube/key.pem (1679 bytes)
	I0819 10:49:16.609283   17853 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19476-9624/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19476-9624/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19476-9624/.minikube/certs/ca-key.pem org=jenkins.addons-454931 san=[127.0.0.1 192.168.49.2 addons-454931 localhost minikube]
	I0819 10:49:16.701989   17853 provision.go:177] copyRemoteCerts
	I0819 10:49:16.702045   17853 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:49:16.702076   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:16.719028   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:16.805893   17853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-9624/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:49:16.827909   17853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-9624/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 10:49:16.849709   17853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-9624/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 10:49:16.871916   17853 provision.go:87] duration metric: took 282.195712ms to configureAuth
	I0819 10:49:16.871946   17853 ubuntu.go:193] setting minikube options for container-runtime
	I0819 10:49:16.872111   17853 config.go:182] Loaded profile config "addons-454931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 10:49:16.872214   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:16.888761   17853 main.go:141] libmachine: Using SSH client type: native
	I0819 10:49:16.888915   17853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0819 10:49:16.888929   17853 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 10:49:17.095244   17853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 10:49:17.095268   17853 machine.go:96] duration metric: took 3.930681597s to provisionDockerMachine
	I0819 10:49:17.095279   17853 client.go:171] duration metric: took 14.446505109s to LocalClient.Create
	I0819 10:49:17.095298   17853 start.go:167] duration metric: took 14.446561239s to libmachine.API.Create "addons-454931"
	I0819 10:49:17.095312   17853 start.go:293] postStartSetup for "addons-454931" (driver="docker")
	I0819 10:49:17.095322   17853 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:49:17.095382   17853 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:49:17.095415   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:17.112794   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:17.202240   17853 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:49:17.205476   17853 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0819 10:49:17.205510   17853 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0819 10:49:17.205518   17853 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0819 10:49:17.205527   17853 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0819 10:49:17.205541   17853 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-9624/.minikube/addons for local assets ...
	I0819 10:49:17.205598   17853 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-9624/.minikube/files for local assets ...
	I0819 10:49:17.205621   17853 start.go:296] duration metric: took 110.304672ms for postStartSetup
	I0819 10:49:17.205925   17853 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-454931
	I0819 10:49:17.222314   17853 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/config.json ...
	I0819 10:49:17.222560   17853 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:49:17.222611   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:17.239054   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:17.322313   17853 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0819 10:49:17.326437   17853 start.go:128] duration metric: took 14.683475473s to createHost
	I0819 10:49:17.326464   17853 start.go:83] releasing machines lock for "addons-454931", held for 14.683609595s
	I0819 10:49:17.326527   17853 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-454931
	I0819 10:49:17.343280   17853 ssh_runner.go:195] Run: cat /version.json
	I0819 10:49:17.343326   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:17.343396   17853 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:49:17.343469   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:17.361821   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:17.363094   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:17.445191   17853 ssh_runner.go:195] Run: systemctl --version
	I0819 10:49:17.535776   17853 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 10:49:17.670779   17853 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 10:49:17.674691   17853 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:49:17.691738   17853 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0819 10:49:17.691805   17853 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:49:17.717461   17853 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0819 10:49:17.717482   17853 start.go:495] detecting cgroup driver to use...
	I0819 10:49:17.717510   17853 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0819 10:49:17.717552   17853 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:49:17.731446   17853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:49:17.741948   17853 docker.go:217] disabling cri-docker service (if available) ...
	I0819 10:49:17.742007   17853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 10:49:17.754575   17853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 10:49:17.767074   17853 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 10:49:17.846360   17853 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 10:49:17.926806   17853 docker.go:233] disabling docker service ...
	I0819 10:49:17.926864   17853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 10:49:17.943462   17853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 10:49:17.954046   17853 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 10:49:18.028523   17853 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 10:49:18.107524   17853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 10:49:18.118027   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:49:18.133535   17853 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 10:49:18.133600   17853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 10:49:18.142715   17853 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 10:49:18.142771   17853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 10:49:18.152104   17853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 10:49:18.161245   17853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 10:49:18.170270   17853 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:49:18.178227   17853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 10:49:18.187053   17853 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 10:49:18.201120   17853 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 10:49:18.210546   17853 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:49:18.218431   17853 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:49:18.226302   17853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:49:18.306342   17853 ssh_runner.go:195] Run: sudo systemctl restart crio
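
	The run of sed edits from 10:49:18.133 onward all target /etc/crio/crio.conf.d/02-crio.conf before this restart. A quick way to confirm the net effect (expected values in the comments are inferred from the edits above, assuming a stock kicbase config):

	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    # expected:
	    #   pause_image = "registry.k8s.io/pause:3.10"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",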
	I0819 10:49:18.400066   17853 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 10:49:18.400134   17853 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 10:49:18.403489   17853 start.go:563] Will wait 60s for crictl version
	I0819 10:49:18.403551   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:49:18.406626   17853 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 10:49:18.439196   17853 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0819 10:49:18.439293   17853 ssh_runner.go:195] Run: crio --version
	I0819 10:49:18.472753   17853 ssh_runner.go:195] Run: crio --version
	I0819 10:49:18.510707   17853 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0819 10:49:18.512095   17853 cli_runner.go:164] Run: docker network inspect addons-454931 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 10:49:18.529015   17853 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0819 10:49:18.532585   17853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
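
	That one-liner is an idempotent hosts update: strip any stale entry, append the fresh one, and copy the result back via sudo cp (a plain redirection would be performed by the unprivileged shell and fail). Spelled out:

	    ENTRY=$'192.168.49.1\thost.minikube.internal'
	    { grep -v $'\thost.minikube.internal$' /etc/hosts; echo "$ENTRY"; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$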
	I0819 10:49:18.542912   17853 kubeadm.go:883] updating cluster {Name:addons-454931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-454931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 10:49:18.543046   17853 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 10:49:18.543108   17853 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 10:49:18.605785   17853 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 10:49:18.605810   17853 crio.go:433] Images already preloaded, skipping extraction
	I0819 10:49:18.605863   17853 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 10:49:18.637324   17853 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 10:49:18.637349   17853 cache_images.go:84] Images are preloaded, skipping loading
	I0819 10:49:18.637357   17853 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0819 10:49:18.637454   17853 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-454931 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-454931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
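
	A hand check that the drop-in above took effect, assuming standard systemd tooling on the node:

	    systemctl cat kubelet       # shows the base unit merged with the 10-kubeadm.conf drop-in
	    systemctl daemon-reload     # the log itself does this at 10:49:18.758 before starting kubelet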
	I0819 10:49:18.637515   17853 ssh_runner.go:195] Run: crio config
	I0819 10:49:18.679671   17853 cni.go:84] Creating CNI manager for ""
	I0819 10:49:18.679694   17853 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 10:49:18.679706   17853 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 10:49:18.679733   17853 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-454931 NodeName:addons-454931 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 10:49:18.679869   17853 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-454931"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 10:49:18.679925   17853 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 10:49:18.687843   17853 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 10:49:18.687903   17853 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 10:49:18.695343   17853 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0819 10:49:18.711270   17853 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 10:49:18.728434   17853 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
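
	That 2151-byte scp is the rendered kubeadm manifest landing on the node as /var/tmp/minikube/kubeadm.yaml.new. If one wanted to vet it by hand before minikube drives kubeadm (the binary path is inferred from the binaries directory checked above), a non-destructive check would be:

	    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run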
	I0819 10:49:18.745124   17853 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0819 10:49:18.748458   17853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:49:18.758830   17853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:49:18.830101   17853 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:49:18.842882   17853 certs.go:68] Setting up /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931 for IP: 192.168.49.2
	I0819 10:49:18.842913   17853 certs.go:194] generating shared ca certs ...
	I0819 10:49:18.842933   17853 certs.go:226] acquiring lock for ca certs: {Name:mk48fd67c854a9bf925bf664f1df64b0d0b4b6de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:49:18.843057   17853 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19476-9624/.minikube/ca.key
	I0819 10:49:18.961901   17853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-9624/.minikube/ca.crt ...
	I0819 10:49:18.961935   17853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-9624/.minikube/ca.crt: {Name:mkc761c5afb6179bb50a06240c218cbbe834c8c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:49:18.962102   17853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-9624/.minikube/ca.key ...
	I0819 10:49:18.962113   17853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-9624/.minikube/ca.key: {Name:mk4046bc8960e7e057b5e1ebdc87ccbaa32a3d4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:49:18.962183   17853 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19476-9624/.minikube/proxy-client-ca.key
	I0819 10:49:19.135757   17853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-9624/.minikube/proxy-client-ca.crt ...
	I0819 10:49:19.135787   17853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-9624/.minikube/proxy-client-ca.crt: {Name:mkf9aa29e8bda76d7d88fcbbc0888bf849fca9a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:49:19.135941   17853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-9624/.minikube/proxy-client-ca.key ...
	I0819 10:49:19.135951   17853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-9624/.minikube/proxy-client-ca.key: {Name:mk0cad2e13f88585bf00aaffca31c72edb515c6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:49:19.136015   17853 certs.go:256] generating profile certs ...
	I0819 10:49:19.136069   17853 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.key
	I0819 10:49:19.136084   17853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.crt with IP's: []
	I0819 10:49:19.193754   17853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.crt ...
	I0819 10:49:19.193788   17853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.crt: {Name:mkbc2b63e57cbe75f518a16c0ee9d186632674dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:49:19.193961   17853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.key ...
	I0819 10:49:19.193973   17853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.key: {Name:mk5558469fc09392c82d41d3442a677656aeff7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:49:19.194057   17853 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/apiserver.key.5ebde190
	I0819 10:49:19.194078   17853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/apiserver.crt.5ebde190 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0819 10:49:19.313968   17853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/apiserver.crt.5ebde190 ...
	I0819 10:49:19.313999   17853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/apiserver.crt.5ebde190: {Name:mkff3fe531f5c3cd481e431a22a5c83a62be088e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:49:19.314168   17853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/apiserver.key.5ebde190 ...
	I0819 10:49:19.314182   17853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/apiserver.key.5ebde190: {Name:mkbe31936aac126a7f0346838926215893f0d8ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:49:19.314251   17853 certs.go:381] copying /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/apiserver.crt.5ebde190 -> /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/apiserver.crt
	I0819 10:49:19.314319   17853 certs.go:385] copying /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/apiserver.key.5ebde190 -> /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/apiserver.key
	I0819 10:49:19.314364   17853 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/proxy-client.key
	I0819 10:49:19.314380   17853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/proxy-client.crt with IP's: []
	I0819 10:49:19.423062   17853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/proxy-client.crt ...
	I0819 10:49:19.423092   17853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/proxy-client.crt: {Name:mk655c9d95fbfb730e1315e8ac055f617ce08e74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:49:19.423244   17853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/proxy-client.key ...
	I0819 10:49:19.423254   17853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/proxy-client.key: {Name:mka4c73dffc73321d54f0c5421c73ff065a9d0f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:49:19.423411   17853 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-9624/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 10:49:19.423447   17853 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-9624/.minikube/certs/ca.pem (1082 bytes)
	I0819 10:49:19.423471   17853 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-9624/.minikube/certs/cert.pem (1123 bytes)
	I0819 10:49:19.423494   17853 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-9624/.minikube/certs/key.pem (1679 bytes)
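
The certs.go / crypto.go steps above generate CA-signed profile certificates, including an apiserver cert whose IP SANs are exactly the ones logged: 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.49.2. Below is a minimal Go sketch of that technique using only crypto/x509 from the standard library; it is an illustration of the general approach, not minikube's actual crypto.go (which loads the profile CA from disk rather than generating a throwaway one).

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA, standing in for the minikube profile CA loaded from disk.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(1, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}

	// Serving cert carrying the same IP SANs the log records for apiserver.crt.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
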
	I0819 10:49:19.424091   17853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-9624/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 10:49:19.446828   17853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-9624/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 10:49:19.467453   17853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-9624/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 10:49:19.489688   17853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-9624/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 10:49:19.511929   17853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0819 10:49:19.534249   17853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 10:49:19.558476   17853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 10:49:19.580188   17853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 10:49:19.601770   17853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-9624/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 10:49:19.623799   17853 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 10:49:19.640747   17853 ssh_runner.go:195] Run: openssl version
	I0819 10:49:19.645867   17853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 10:49:19.654554   17853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:49:19.657876   17853 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:49:19.657934   17853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:49:19.664234   17853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
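
The openssl steps above install the minikube CA where OpenSSL-linked clients can find it: `openssl x509 -hash -noout` prints the subject-name hash (b5213941 here), and the CA is symlinked into /etc/ssl/certs as <hash>.0. A small sketch of the same sequence, assuming openssl is on PATH; this is illustrative, not ssh_runner's implementation, and it tolerates an existing link rather than forcing it as `ln -fs` does.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Ask openssl for the subject-name hash, as in the log line above.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))

	// OpenSSL resolves CAs in /etc/ssl/certs by <subject-hash>.0 filenames.
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil && !os.IsExist(err) {
		panic(err)
	}
	fmt.Println("linked", link)
}
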
	I0819 10:49:19.672838   17853 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 10:49:19.676060   17853 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 10:49:19.676119   17853 kubeadm.go:392] StartCluster: {Name:addons-454931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-454931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:49:19.676193   17853 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 10:49:19.676235   17853 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 10:49:19.707766   17853 cri.go:89] found id: ""
	I0819 10:49:19.707836   17853 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 10:49:19.715505   17853 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 10:49:19.723310   17853 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0819 10:49:19.723364   17853 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 10:49:19.731302   17853 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 10:49:19.731319   17853 kubeadm.go:157] found existing configuration files:
	
	I0819 10:49:19.731359   17853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 10:49:19.739239   17853 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 10:49:19.739291   17853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 10:49:19.747064   17853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 10:49:19.754814   17853 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 10:49:19.754863   17853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 10:49:19.762429   17853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 10:49:19.770186   17853 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 10:49:19.770252   17853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 10:49:19.778808   17853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 10:49:19.786630   17853 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 10:49:19.786680   17853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
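
The grep/rm sequence above is minikube's stale-config cleanup: each kubeadm-managed kubeconfig is kept only if it already points at https://control-plane.minikube.internal:8443; otherwise it is removed so `kubeadm init` can regenerate it (on this first start all four are simply absent). A hedged sketch of that loop run locally; minikube actually performs these checks over SSH inside the node container.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, c := range confs {
		data, err := os.ReadFile(c)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // config already targets the expected endpoint; keep it
		}
		os.Remove(c) // stale or missing; mirrors `sudo rm -f <conf>`
		fmt.Println("removed (or absent):", c)
	}
}
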
	I0819 10:49:19.794074   17853 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0819 10:49:19.829715   17853 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 10:49:19.829810   17853 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 10:49:19.845029   17853 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0819 10:49:19.845110   17853 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1066-gcp
	I0819 10:49:19.845168   17853 kubeadm.go:310] OS: Linux
	I0819 10:49:19.845225   17853 kubeadm.go:310] CGROUPS_CPU: enabled
	I0819 10:49:19.845300   17853 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0819 10:49:19.845368   17853 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0819 10:49:19.845450   17853 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0819 10:49:19.845496   17853 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0819 10:49:19.845580   17853 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0819 10:49:19.845676   17853 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0819 10:49:19.845730   17853 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0819 10:49:19.845779   17853 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0819 10:49:19.893054   17853 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 10:49:19.893177   17853 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 10:49:19.893275   17853 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 10:49:19.899154   17853 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 10:49:19.902076   17853 out.go:235]   - Generating certificates and keys ...
	I0819 10:49:19.902193   17853 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 10:49:19.902283   17853 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 10:49:20.090446   17853 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 10:49:20.288823   17853 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 10:49:20.596534   17853 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 10:49:20.864187   17853 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 10:49:21.037487   17853 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 10:49:21.037670   17853 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-454931 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0819 10:49:21.211151   17853 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 10:49:21.211284   17853 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-454931 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0819 10:49:21.288446   17853 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 10:49:21.482035   17853 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 10:49:21.615953   17853 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 10:49:21.616032   17853 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 10:49:21.961107   17853 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 10:49:22.049138   17853 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 10:49:22.119968   17853 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 10:49:22.221399   17853 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 10:49:22.341876   17853 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 10:49:22.342398   17853 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 10:49:22.344934   17853 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 10:49:22.347037   17853 out.go:235]   - Booting up control plane ...
	I0819 10:49:22.347142   17853 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 10:49:22.347291   17853 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 10:49:22.347432   17853 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 10:49:22.356614   17853 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 10:49:22.361734   17853 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 10:49:22.361810   17853 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 10:49:22.443967   17853 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 10:49:22.444141   17853 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 10:49:22.945559   17853 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.647834ms
	I0819 10:49:22.945683   17853 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 10:49:27.447693   17853 kubeadm.go:310] [api-check] The API server is healthy after 4.502072204s
	I0819 10:49:27.458316   17853 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 10:49:27.471421   17853 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 10:49:27.489735   17853 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 10:49:27.489946   17853 kubeadm.go:310] [mark-control-plane] Marking the node addons-454931 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 10:49:27.496877   17853 kubeadm.go:310] [bootstrap-token] Using token: drl235.cjwdnkfrhgh3xdmw
	I0819 10:49:27.498347   17853 out.go:235]   - Configuring RBAC rules ...
	I0819 10:49:27.498495   17853 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 10:49:27.501602   17853 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 10:49:27.508849   17853 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 10:49:27.511182   17853 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 10:49:27.514009   17853 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 10:49:27.516213   17853 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 10:49:27.854495   17853 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 10:49:28.274928   17853 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 10:49:28.853496   17853 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 10:49:28.854362   17853 kubeadm.go:310] 
	I0819 10:49:28.854421   17853 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 10:49:28.854432   17853 kubeadm.go:310] 
	I0819 10:49:28.854507   17853 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 10:49:28.854523   17853 kubeadm.go:310] 
	I0819 10:49:28.854553   17853 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 10:49:28.854607   17853 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 10:49:28.854649   17853 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 10:49:28.854660   17853 kubeadm.go:310] 
	I0819 10:49:28.854701   17853 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 10:49:28.854707   17853 kubeadm.go:310] 
	I0819 10:49:28.854748   17853 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 10:49:28.854754   17853 kubeadm.go:310] 
	I0819 10:49:28.854793   17853 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 10:49:28.854856   17853 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 10:49:28.854916   17853 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 10:49:28.854923   17853 kubeadm.go:310] 
	I0819 10:49:28.854990   17853 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 10:49:28.855084   17853 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 10:49:28.855106   17853 kubeadm.go:310] 
	I0819 10:49:28.855223   17853 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token drl235.cjwdnkfrhgh3xdmw \
	I0819 10:49:28.855323   17853 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7ac81bd34d5e0dd4c745e6e1049376f9105cbd830050f6d1cbc53a7018b4d10a \
	I0819 10:49:28.855348   17853 kubeadm.go:310] 	--control-plane 
	I0819 10:49:28.855355   17853 kubeadm.go:310] 
	I0819 10:49:28.855425   17853 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 10:49:28.855433   17853 kubeadm.go:310] 
	I0819 10:49:28.855508   17853 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token drl235.cjwdnkfrhgh3xdmw \
	I0819 10:49:28.855599   17853 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7ac81bd34d5e0dd4c745e6e1049376f9105cbd830050f6d1cbc53a7018b4d10a 
	I0819 10:49:28.857649   17853 kubeadm.go:310] W0819 10:49:19.827206    1293 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 10:49:28.857929   17853 kubeadm.go:310] W0819 10:49:19.827792    1293 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 10:49:28.858111   17853 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1066-gcp\n", err: exit status 1
	I0819 10:49:28.858200   17853 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
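
The sha256:7ac8… value in the join commands above is kubeadm's CA certificate hash, which is computed over the DER-encoded Subject Public Key Info of the cluster CA rather than over the whole certificate. A short sketch of the derivation; the ca.crt path is the one used earlier in this log.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}

Joining nodes recompute this hash from the CA presented by the API server, so a token alone is not enough to join a cluster impersonated by an attacker.
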
	I0819 10:49:28.858223   17853 cni.go:84] Creating CNI manager for ""
	I0819 10:49:28.858230   17853 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 10:49:28.860078   17853 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 10:49:28.861210   17853 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 10:49:28.865230   17853 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 10:49:28.865245   17853 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 10:49:28.882605   17853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0819 10:49:29.077337   17853 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 10:49:29.077400   17853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:49:29.077448   17853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-454931 minikube.k8s.io/updated_at=2024_08_19T10_49_29_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934 minikube.k8s.io/name=addons-454931 minikube.k8s.io/primary=true
	I0819 10:49:29.085752   17853 ops.go:34] apiserver oom_adj: -16
	I0819 10:49:29.172000   17853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:49:29.672271   17853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:49:30.172383   17853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:49:30.672160   17853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:49:31.172075   17853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:49:31.672372   17853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:49:32.172709   17853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:49:32.673044   17853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:49:32.737833   17853 kubeadm.go:1113] duration metric: took 3.660490147s to wait for elevateKubeSystemPrivileges
	I0819 10:49:32.737867   17853 kubeadm.go:394] duration metric: took 13.061753483s to StartCluster
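
The burst of `kubectl get sa default` calls above is a poll: the command is retried roughly every 500ms until the default ServiceAccount exists, which is what the 3.66s elevateKubeSystemPrivileges metric measures. A simplified sketch of that wait loop, shelling out to kubectl; the two-minute deadline is an assumption, not minikube's constant.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
			"get", "sa", "default")
		if err := cmd.Run(); err == nil {
			fmt.Println("default ServiceAccount is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // SA not created yet; poll again
	}
	fmt.Println("timed out waiting for default ServiceAccount")
}
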
	I0819 10:49:32.737883   17853 settings.go:142] acquiring lock: {Name:mka0415b2b44df4b87df0b554c885fde1a08273f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:49:32.737982   17853 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19476-9624/kubeconfig
	I0819 10:49:32.738313   17853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-9624/kubeconfig: {Name:mk5e1f8a598926e7f378554b3f9ff1e342d2d455 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:49:32.738487   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 10:49:32.738507   17853 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 10:49:32.738589   17853 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0819 10:49:32.738685   17853 config.go:182] Loaded profile config "addons-454931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 10:49:32.738693   17853 addons.go:69] Setting helm-tiller=true in profile "addons-454931"
	I0819 10:49:32.738710   17853 addons.go:69] Setting volumesnapshots=true in profile "addons-454931"
	I0819 10:49:32.738709   17853 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-454931"
	I0819 10:49:32.738730   17853 addons.go:234] Setting addon helm-tiller=true in "addons-454931"
	I0819 10:49:32.738690   17853 addons.go:69] Setting yakd=true in profile "addons-454931"
	I0819 10:49:32.738735   17853 addons.go:234] Setting addon volumesnapshots=true in "addons-454931"
	I0819 10:49:32.738738   17853 addons.go:69] Setting registry=true in profile "addons-454931"
	I0819 10:49:32.738750   17853 addons.go:234] Setting addon yakd=true in "addons-454931"
	I0819 10:49:32.738805   17853 host.go:66] Checking if "addons-454931" exists ...
	I0819 10:49:32.738702   17853 addons.go:69] Setting volcano=true in profile "addons-454931"
	I0819 10:49:32.738892   17853 addons.go:234] Setting addon volcano=true in "addons-454931"
	I0819 10:49:32.738931   17853 host.go:66] Checking if "addons-454931" exists ...
	I0819 10:49:32.738717   17853 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-454931"
	I0819 10:49:32.739021   17853 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-454931"
	I0819 10:49:32.738756   17853 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-454931"
	I0819 10:49:32.739106   17853 host.go:66] Checking if "addons-454931" exists ...
	I0819 10:49:32.738757   17853 addons.go:234] Setting addon registry=true in "addons-454931"
	I0819 10:49:32.739211   17853 host.go:66] Checking if "addons-454931" exists ...
	I0819 10:49:32.739301   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:32.738739   17853 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-454931"
	I0819 10:49:32.739392   17853 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-454931"
	I0819 10:49:32.738765   17853 addons.go:69] Setting storage-provisioner=true in profile "addons-454931"
	I0819 10:49:32.739415   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:32.739419   17853 host.go:66] Checking if "addons-454931" exists ...
	I0819 10:49:32.739429   17853 addons.go:234] Setting addon storage-provisioner=true in "addons-454931"
	I0819 10:49:32.739451   17853 host.go:66] Checking if "addons-454931" exists ...
	I0819 10:49:32.739528   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:32.739624   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:32.739854   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:32.739908   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:32.738765   17853 addons.go:69] Setting cloud-spanner=true in profile "addons-454931"
	I0819 10:49:32.740210   17853 addons.go:234] Setting addon cloud-spanner=true in "addons-454931"
	I0819 10:49:32.740238   17853 host.go:66] Checking if "addons-454931" exists ...
	I0819 10:49:32.738767   17853 addons.go:69] Setting ingress=true in profile "addons-454931"
	I0819 10:49:32.740349   17853 addons.go:234] Setting addon ingress=true in "addons-454931"
	I0819 10:49:32.740394   17853 host.go:66] Checking if "addons-454931" exists ...
	I0819 10:49:32.740697   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:32.740953   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:32.738769   17853 host.go:66] Checking if "addons-454931" exists ...
	I0819 10:49:32.741266   17853 out.go:177] * Verifying Kubernetes components...
	I0819 10:49:32.738769   17853 host.go:66] Checking if "addons-454931" exists ...
	I0819 10:49:32.742041   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:32.738775   17853 addons.go:69] Setting default-storageclass=true in profile "addons-454931"
	I0819 10:49:32.742665   17853 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-454931"
	I0819 10:49:32.742821   17853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:49:32.738776   17853 addons.go:69] Setting gcp-auth=true in profile "addons-454931"
	I0819 10:49:32.743010   17853 mustload.go:65] Loading cluster: addons-454931
	I0819 10:49:32.743189   17853 config.go:182] Loaded profile config "addons-454931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 10:49:32.743350   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:32.743424   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:32.743717   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:32.738777   17853 addons.go:69] Setting ingress-dns=true in profile "addons-454931"
	I0819 10:49:32.746958   17853 addons.go:234] Setting addon ingress-dns=true in "addons-454931"
	I0819 10:49:32.747035   17853 host.go:66] Checking if "addons-454931" exists ...
	I0819 10:49:32.738778   17853 addons.go:69] Setting inspektor-gadget=true in profile "addons-454931"
	I0819 10:49:32.747312   17853 addons.go:234] Setting addon inspektor-gadget=true in "addons-454931"
	I0819 10:49:32.747356   17853 host.go:66] Checking if "addons-454931" exists ...
	I0819 10:49:32.747925   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:32.738781   17853 addons.go:69] Setting metrics-server=true in profile "addons-454931"
	I0819 10:49:32.752833   17853 addons.go:234] Setting addon metrics-server=true in "addons-454931"
	I0819 10:49:32.752906   17853 host.go:66] Checking if "addons-454931" exists ...
	I0819 10:49:32.753430   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:32.739336   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:32.776446   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	W0819 10:49:32.781962   17853 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0819 10:49:32.788438   17853 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0819 10:49:32.800719   17853 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0819 10:49:32.803818   17853 host.go:66] Checking if "addons-454931" exists ...
	I0819 10:49:32.803872   17853 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-454931"
	I0819 10:49:32.803933   17853 host.go:66] Checking if "addons-454931" exists ...
	I0819 10:49:32.804401   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:32.804576   17853 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0819 10:49:32.804661   17853 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0819 10:49:32.804686   17853 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 10:49:32.804703   17853 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0819 10:49:32.818961   17853 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 10:49:32.819013   17853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 10:49:32.819069   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:32.817195   17853 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0819 10:49:32.819141   17853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0819 10:49:32.819204   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:32.820908   17853 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0819 10:49:32.820949   17853 out.go:177]   - Using image docker.io/registry:2.8.3
	I0819 10:49:32.820911   17853 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0819 10:49:32.821003   17853 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0819 10:49:32.821497   17853 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0819 10:49:32.821093   17853 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 10:49:32.822239   17853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0819 10:49:32.822298   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:32.823084   17853 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 10:49:32.823138   17853 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0819 10:49:32.823157   17853 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0819 10:49:32.823240   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:32.824541   17853 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0819 10:49:32.824656   17853 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0819 10:49:32.824818   17853 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0819 10:49:32.824849   17853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0819 10:49:32.824935   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:32.825519   17853 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 10:49:32.825846   17853 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0819 10:49:32.825863   17853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0819 10:49:32.825910   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:32.827023   17853 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 10:49:32.827040   17853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0819 10:49:32.827272   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:32.827496   17853 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0819 10:49:32.828152   17853 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0819 10:49:32.829240   17853 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 10:49:32.829254   17853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0819 10:49:32.829319   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:32.830965   17853 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0819 10:49:32.832438   17853 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0819 10:49:32.833740   17853 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0819 10:49:32.836984   17853 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0819 10:49:32.837006   17853 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0819 10:49:32.837074   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:32.837613   17853 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0819 10:49:32.837650   17853 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0819 10:49:32.837700   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:32.837725   17853 addons.go:234] Setting addon default-storageclass=true in "addons-454931"
	I0819 10:49:32.837763   17853 host.go:66] Checking if "addons-454931" exists ...
	I0819 10:49:32.838236   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:32.841013   17853 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0819 10:49:32.842093   17853 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0819 10:49:32.842116   17853 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0819 10:49:32.842188   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:32.871660   17853 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0819 10:49:32.872826   17853 out.go:177]   - Using image docker.io/busybox:stable
	I0819 10:49:32.873925   17853 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 10:49:32.873950   17853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0819 10:49:32.874008   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:32.878762   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
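
The sed pipeline above rewrites CoreDNS's Corefile so that host.minikube.internal resolves to the Docker gateway (192.168.49.1): a hosts block is inserted ahead of the forward plugin so it wins lookups before queries fall through to the host resolver. A sketch of the same rewrite in Go; the sample Corefile here is a typical default, not the one pulled from this cluster's ConfigMap.

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a CoreDNS hosts block immediately before the
// forward plugin, mirroring the sed expression in the log above.
func injectHostRecord(corefile, gatewayIP string) string {
	hosts := fmt.Sprintf(`        hosts {
           %s host.minikube.internal
           fallthrough
        }
`, gatewayIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hosts) // hosts must precede forward to take priority
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := `.:53 {
        errors
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
}
`
	fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
}
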
	I0819 10:49:32.881465   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:32.885684   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:32.892993   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:32.894643   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:32.899663   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:32.905896   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:32.907691   17853 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0819 10:49:32.908831   17853 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 10:49:32.908860   17853 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 10:49:32.908931   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:32.910373   17853 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 10:49:32.910403   17853 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 10:49:32.910455   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:32.913144   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:32.915750   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:32.916321   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:32.917187   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:32.920051   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:32.924893   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:32.933176   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:32.934129   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
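
The repeated `docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'` calls above recover the host port Docker mapped to the container's sshd, which is the Port:32768 every sshutil client dials on 127.0.0.1. A sketch of the same lookup done by decoding the inspect JSON directly instead of through a Go template.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspect models only the fragment of `docker container inspect` output we
// need: the port bindings under NetworkSettings.Ports.
type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "container", "inspect", "addons-454931").Output()
	if err != nil {
		panic(err)
	}
	var infos []inspect // docker inspect always returns a JSON array
	if err := json.Unmarshal(out, &infos); err != nil {
		panic(err)
	}
	fmt.Println("ssh host port:", infos[0].NetworkSettings.Ports["22/tcp"][0].HostPort)
}
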
	W0819 10:49:32.958908   17853 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0819 10:49:32.958943   17853 retry.go:31] will retry after 294.032151ms: ssh: handshake failed: EOF
	W0819 10:49:32.958994   17853 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0819 10:49:32.959016   17853 retry.go:31] will retry after 328.164025ms: ssh: handshake failed: EOF
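
The two handshake failures above are absorbed by retry.go with a short randomized delay (294ms, 328ms) before redialing, so the many concurrent addon installers do not hammer the node's sshd in lockstep. A generic sketch of that pattern; the base delay and jitter range are assumptions, not minikube's exact constants.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// dialWithRetry retries a transient failure with a jittered delay, in the
// spirit of the retry.go lines above.
func dialWithRetry(dial func() error, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = dial(); err == nil {
			return nil
		}
		// 200-400ms, randomized so concurrent callers desynchronize.
		delay := 200*time.Millisecond + time.Duration(rand.Int63n(int64(200*time.Millisecond)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	_ = dialWithRetry(func() error {
		calls++
		if calls < 3 {
			return errors.New("ssh: handshake failed: EOF")
		}
		return nil
	}, 5)
}
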
	I0819 10:49:32.967023   17853 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:49:33.355168   17853 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 10:49:33.355217   17853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0819 10:49:33.369962   17853 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0819 10:49:33.370047   17853 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0819 10:49:33.375199   17853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 10:49:33.454377   17853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0819 10:49:33.454517   17853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 10:49:33.455666   17853 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 10:49:33.455735   17853 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 10:49:33.456448   17853 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0819 10:49:33.456500   17853 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0819 10:49:33.468918   17853 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0819 10:49:33.469004   17853 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0819 10:49:33.479312   17853 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0819 10:49:33.479367   17853 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0819 10:49:33.566488   17853 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0819 10:49:33.566566   17853 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0819 10:49:33.570034   17853 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0819 10:49:33.570114   17853 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0819 10:49:33.571163   17853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 10:49:33.574797   17853 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0819 10:49:33.574826   17853 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0819 10:49:33.575268   17853 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0819 10:49:33.575290   17853 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0819 10:49:33.655187   17853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 10:49:33.659896   17853 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 10:49:33.659926   17853 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 10:49:33.755220   17853 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0819 10:49:33.755251   17853 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0819 10:49:33.757082   17853 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0819 10:49:33.757106   17853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0819 10:49:33.765747   17853 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0819 10:49:33.765782   17853 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0819 10:49:33.858176   17853 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0819 10:49:33.858203   17853 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0819 10:49:33.874640   17853 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0819 10:49:33.874670   17853 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0819 10:49:33.955886   17853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 10:49:33.956814   17853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 10:49:34.054264   17853 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0819 10:49:34.054293   17853 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0819 10:49:34.055839   17853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 10:49:34.056304   17853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0819 10:49:34.066549   17853 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0819 10:49:34.066592   17853 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0819 10:49:34.077710   17853 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0819 10:49:34.077743   17853 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0819 10:49:34.165193   17853 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0819 10:49:34.165224   17853 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0819 10:49:34.354270   17853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0819 10:49:34.355349   17853 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0819 10:49:34.355431   17853 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0819 10:49:34.455229   17853 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0819 10:49:34.455276   17853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0819 10:49:34.459874   17853 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0819 10:49:34.459956   17853 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0819 10:49:34.461960   17853 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.583164802s)
	I0819 10:49:34.462028   17853 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0819 10:49:34.463125   17853 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.496073021s)
	I0819 10:49:34.464022   17853 node_ready.go:35] waiting up to 6m0s for node "addons-454931" to be "Ready" ...
	I0819 10:49:34.564334   17853 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0819 10:49:34.564422   17853 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0819 10:49:34.757395   17853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0819 10:49:34.761407   17853 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0819 10:49:34.761497   17853 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0819 10:49:34.855381   17853 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 10:49:34.855409   17853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0819 10:49:34.858937   17853 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0819 10:49:34.859024   17853 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	W0819 10:49:34.968939   17853 kapi.go:211] failed rescaling "coredns" deployment in "kube-system" namespace and "addons-454931" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0819 10:49:34.968969   17853 start.go:160] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0819 10:49:35.355371   17853 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0819 10:49:35.355468   17853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0819 10:49:35.356436   17853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 10:49:35.467109   17853 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 10:49:35.467136   17853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0819 10:49:35.556350   17853 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0819 10:49:35.556383   17853 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0819 10:49:35.962165   17853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 10:49:35.977541   17853 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0819 10:49:35.977573   17853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0819 10:49:36.269135   17853 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0819 10:49:36.269226   17853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0819 10:49:36.564891   17853 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 10:49:36.564964   17853 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0819 10:49:36.578432   17853 node_ready.go:53] node "addons-454931" has status "Ready":"False"
	I0819 10:49:36.858242   17853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 10:49:37.176316   17853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.721899924s)
	I0819 10:49:37.176443   17853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.721905101s)
	I0819 10:49:37.176663   17853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.801379163s)
	I0819 10:49:37.660644   17853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.089405908s)
	I0819 10:49:37.660772   17853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.005546023s)
	I0819 10:49:37.660855   17853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.704936897s)
	W0819 10:49:37.861070   17853 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0819 10:49:37.957891   17853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.001033729s)
	I0819 10:49:37.957936   17853 addons.go:475] Verifying addon metrics-server=true in "addons-454931"
	I0819 10:49:38.970653   17853 node_ready.go:53] node "addons-454931" has status "Ready":"False"
	I0819 10:49:39.364596   17853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.30868271s)
	I0819 10:49:39.364666   17853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.308276546s)
	I0819 10:49:39.364685   17853 addons.go:475] Verifying addon ingress=true in "addons-454931"
	I0819 10:49:39.364694   17853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.010324286s)
	I0819 10:49:39.364704   17853 addons.go:475] Verifying addon registry=true in "addons-454931"
	I0819 10:49:39.364765   17853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.607282971s)
	I0819 10:49:39.366309   17853 out.go:177] * Verifying ingress addon...
	I0819 10:49:39.366311   17853 out.go:177] * Verifying registry addon...
	I0819 10:49:39.366310   17853 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-454931 service yakd-dashboard -n yakd-dashboard
	
	I0819 10:49:39.368710   17853 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0819 10:49:39.369531   17853 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0819 10:49:39.376434   17853 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0819 10:49:39.376455   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:39.376710   17853 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0819 10:49:39.376729   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:39.872871   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:39.873322   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:40.054582   17853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.09237239s)
	I0819 10:49:40.054658   17853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.697932367s)
	W0819 10:49:40.054735   17853 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 10:49:40.054764   17853 retry.go:31] will retry after 334.320597ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 10:49:40.058043   17853 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0819 10:49:40.058122   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:40.087684   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:40.371822   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:40.373272   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:40.375370   17853 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0819 10:49:40.390195   17853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 10:49:40.396035   17853 addons.go:234] Setting addon gcp-auth=true in "addons-454931"
	I0819 10:49:40.396133   17853 host.go:66] Checking if "addons-454931" exists ...
	I0819 10:49:40.396639   17853 cli_runner.go:164] Run: docker container inspect addons-454931 --format={{.State.Status}}
	I0819 10:49:40.413145   17853 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0819 10:49:40.413198   17853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454931
	I0819 10:49:40.429933   17853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/addons-454931/id_rsa Username:docker}
	I0819 10:49:40.773077   17853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.914730891s)
	I0819 10:49:40.773120   17853 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-454931"
	I0819 10:49:40.774276   17853 out.go:177] * Verifying csi-hostpath-driver addon...
	I0819 10:49:40.776206   17853 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0819 10:49:40.782683   17853 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0819 10:49:40.782703   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:40.872611   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:40.873453   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:41.280442   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:41.380492   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:41.380895   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:41.392804   17853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.002559864s)
	I0819 10:49:41.394210   17853 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 10:49:41.395416   17853 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0819 10:49:41.396725   17853 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0819 10:49:41.396743   17853 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0819 10:49:41.413454   17853 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0819 10:49:41.413479   17853 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0819 10:49:41.429238   17853 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 10:49:41.429259   17853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0819 10:49:41.445063   17853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 10:49:41.467991   17853 node_ready.go:53] node "addons-454931" has status "Ready":"False"
	I0819 10:49:41.775472   17853 addons.go:475] Verifying addon gcp-auth=true in "addons-454931"
	I0819 10:49:41.776787   17853 out.go:177] * Verifying gcp-auth addon...
	I0819 10:49:41.778774   17853 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0819 10:49:41.779473   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:41.880558   17853 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0819 10:49:41.880582   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:41.880564   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:41.881007   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:42.279654   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:42.281289   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:42.372277   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:42.373500   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:42.780122   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:42.782550   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:42.874177   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:42.875421   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:43.280170   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:43.282607   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:43.372438   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:43.372890   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:43.779736   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:43.781187   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:43.871904   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:43.873288   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:43.969010   17853 node_ready.go:53] node "addons-454931" has status "Ready":"False"
	I0819 10:49:44.280044   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:44.281867   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:44.372657   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:44.373305   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:44.779275   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:44.781573   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:44.872615   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:44.872999   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:45.279459   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:45.281129   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:45.371696   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:45.372643   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:45.779475   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:45.781034   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:45.871751   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:45.872492   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:46.279322   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:46.280786   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:46.372476   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:46.372726   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:46.466673   17853 node_ready.go:53] node "addons-454931" has status "Ready":"False"
	I0819 10:49:46.779941   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:46.781192   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:46.880715   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:46.881432   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:47.279626   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:47.281214   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:47.371906   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:47.372873   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:47.780234   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:47.781524   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:47.872011   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:47.872505   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:48.279356   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:48.281050   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:48.371672   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:48.372736   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:48.466841   17853 node_ready.go:53] node "addons-454931" has status "Ready":"False"
	I0819 10:49:48.780022   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:48.781257   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:48.872000   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:48.872866   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:49.280224   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:49.281556   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:49.372122   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:49.372556   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:49.779355   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:49.781003   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:49.871197   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:49.872409   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:50.279595   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:50.281118   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:50.372006   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:50.372939   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:50.466908   17853 node_ready.go:53] node "addons-454931" has status "Ready":"False"
	I0819 10:49:50.779876   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:50.781272   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:50.871854   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:50.872754   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:51.280026   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:51.281377   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:51.371964   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:51.372292   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:51.779582   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:51.781813   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:51.872413   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:51.872969   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:51.972295   17853 node_ready.go:49] node "addons-454931" has status "Ready":"True"
	I0819 10:49:51.972322   17853 node_ready.go:38] duration metric: took 17.508245592s for node "addons-454931" to be "Ready" ...
	I0819 10:49:51.972333   17853 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 10:49:51.987266   17853 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-4lg4p" in "kube-system" namespace to be "Ready" ...
	I0819 10:49:52.281364   17853 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0819 10:49:52.281391   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:52.282637   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:52.375857   17853 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0819 10:49:52.375890   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:52.376507   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:52.781915   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:52.783684   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:52.882882   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:52.883089   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:53.281273   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:53.281286   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:53.381031   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:53.381207   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:53.492741   17853 pod_ready.go:93] pod "coredns-6f6b679f8f-4lg4p" in "kube-system" namespace has status "Ready":"True"
	I0819 10:49:53.492766   17853 pod_ready.go:82] duration metric: took 1.505464081s for pod "coredns-6f6b679f8f-4lg4p" in "kube-system" namespace to be "Ready" ...
	I0819 10:49:53.492776   17853 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hrnrm" in "kube-system" namespace to be "Ready" ...
	I0819 10:49:53.497456   17853 pod_ready.go:93] pod "coredns-6f6b679f8f-hrnrm" in "kube-system" namespace has status "Ready":"True"
	I0819 10:49:53.497480   17853 pod_ready.go:82] duration metric: took 4.697892ms for pod "coredns-6f6b679f8f-hrnrm" in "kube-system" namespace to be "Ready" ...
	I0819 10:49:53.497498   17853 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-454931" in "kube-system" namespace to be "Ready" ...
	I0819 10:49:53.502040   17853 pod_ready.go:93] pod "etcd-addons-454931" in "kube-system" namespace has status "Ready":"True"
	I0819 10:49:53.502072   17853 pod_ready.go:82] duration metric: took 4.566739ms for pod "etcd-addons-454931" in "kube-system" namespace to be "Ready" ...
	I0819 10:49:53.502092   17853 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-454931" in "kube-system" namespace to be "Ready" ...
	I0819 10:49:53.506745   17853 pod_ready.go:93] pod "kube-apiserver-addons-454931" in "kube-system" namespace has status "Ready":"True"
	I0819 10:49:53.506768   17853 pod_ready.go:82] duration metric: took 4.668906ms for pod "kube-apiserver-addons-454931" in "kube-system" namespace to be "Ready" ...
	I0819 10:49:53.506780   17853 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-454931" in "kube-system" namespace to be "Ready" ...
	I0819 10:49:53.567946   17853 pod_ready.go:93] pod "kube-controller-manager-addons-454931" in "kube-system" namespace has status "Ready":"True"
	I0819 10:49:53.567968   17853 pod_ready.go:82] duration metric: took 61.181375ms for pod "kube-controller-manager-addons-454931" in "kube-system" namespace to be "Ready" ...
	I0819 10:49:53.567981   17853 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8dmbm" in "kube-system" namespace to be "Ready" ...
	I0819 10:49:53.781581   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:53.781763   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:53.872888   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:53.873261   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:53.967287   17853 pod_ready.go:93] pod "kube-proxy-8dmbm" in "kube-system" namespace has status "Ready":"True"
	I0819 10:49:53.967324   17853 pod_ready.go:82] duration metric: took 399.337816ms for pod "kube-proxy-8dmbm" in "kube-system" namespace to be "Ready" ...
	I0819 10:49:53.967344   17853 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-454931" in "kube-system" namespace to be "Ready" ...
	I0819 10:49:54.281496   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:54.281862   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:54.367276   17853 pod_ready.go:93] pod "kube-scheduler-addons-454931" in "kube-system" namespace has status "Ready":"True"
	I0819 10:49:54.367300   17853 pod_ready.go:82] duration metric: took 399.948456ms for pod "kube-scheduler-addons-454931" in "kube-system" namespace to be "Ready" ...
	I0819 10:49:54.367311   17853 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-8988944d9-w697b" in "kube-system" namespace to be "Ready" ...
	I0819 10:49:54.373254   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:54.373887   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:54.780800   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:54.781987   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:54.872060   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:54.873092   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:55.280997   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:55.281319   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:55.381627   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:55.382253   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:55.781627   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:55.782805   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:55.871898   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:55.873829   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:56.281253   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:56.282177   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:56.372681   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:56.373512   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:49:56.374336   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:56.781020   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:56.781673   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:56.880042   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:56.880503   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:57.281302   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:57.281474   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:57.382112   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:57.382249   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:57.780731   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:57.781981   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:57.871599   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:57.872793   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:58.281315   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:58.281755   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:58.371558   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:58.372486   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:58.781208   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:58.781771   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:58.872175   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:49:58.872297   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:58.872534   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:59.280605   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:59.281708   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:59.373345   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:49:59.373912   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:59.781804   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:49:59.781940   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:49:59.871973   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:49:59.873246   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:00.281806   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:00.282475   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:00.371920   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:00.373554   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:00.780589   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:00.781068   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:00.871774   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:00.873288   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:00.873985   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:01.281226   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:01.282679   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:01.372551   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:01.373085   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:01.781470   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:01.784219   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:01.873574   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:01.875214   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:02.280648   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:02.281549   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:02.387435   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:02.388221   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:02.780435   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:02.781535   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:02.871826   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:02.873201   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:03.281805   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:03.283190   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:03.372185   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:03.373319   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:03.374009   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:03.782026   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:03.782315   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:03.871888   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:03.873652   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:04.281303   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:04.281811   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:04.371641   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:04.372602   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:04.780681   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:04.781858   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:04.871601   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:04.872948   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:05.281074   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:05.281879   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:05.371821   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:05.373417   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:05.780916   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:05.781413   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:05.872379   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:05.873158   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:05.873563   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:06.280659   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:06.281545   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:06.372421   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:06.373033   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:06.780573   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:06.781562   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:06.872906   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:06.873059   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:07.281083   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:07.281400   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:07.372558   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:07.372968   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:07.780783   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:07.781574   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:07.872775   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:07.873232   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:08.280855   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:08.281677   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:08.372709   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:08.372741   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:08.373395   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:08.780598   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:08.781623   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:08.872878   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:08.873096   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:09.281742   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:09.282210   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:09.372414   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:09.375916   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:09.781412   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:09.783765   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:09.873607   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:09.875022   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:10.355342   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:10.356498   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:10.461029   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:10.463063   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:10.470623   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:10.857700   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:10.859749   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:10.873195   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:10.875703   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:11.282121   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:11.282926   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:11.371642   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:11.374767   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:11.781541   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:11.782264   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:11.872179   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:11.873224   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:12.281600   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:12.282952   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:12.372661   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:12.374203   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:12.781373   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:12.781794   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:12.872074   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:12.873453   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:12.874088   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:13.282725   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:13.284144   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:13.372638   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:13.372761   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:13.780771   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:13.781306   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:13.872727   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:13.874142   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:14.280906   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:14.281852   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:14.371699   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:14.373308   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:14.780494   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:14.781710   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:14.873131   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:14.873541   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:15.281673   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:15.281821   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:15.373179   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:15.382662   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:15.383013   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:15.781657   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:15.782052   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:15.872074   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:15.873133   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:16.280998   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:16.281918   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:16.371662   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:16.373264   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:16.781973   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:16.782362   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:16.872173   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:16.873263   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:17.281806   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:17.281980   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:17.371820   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:17.373169   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:17.781296   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:17.781756   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:17.871746   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:17.872706   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:17.872707   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:18.281145   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:18.281741   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:18.373036   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:18.373513   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:18.781768   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:18.782140   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:18.881761   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:18.882315   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:19.281474   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:19.281727   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:19.372105   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:19.373222   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:19.781370   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:19.781568   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:19.872991   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:19.873342   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:19.873882   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:20.281143   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:20.282050   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:20.371789   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:20.373269   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:20.780593   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:20.781769   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:20.871999   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:20.872482   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:21.280499   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:21.281616   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:21.372472   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:21.372789   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:21.780369   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:21.781306   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:21.871906   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:21.873142   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:22.279825   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:22.281197   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:22.371915   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:22.373276   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:22.373743   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:22.780095   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:22.780799   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:22.872682   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:22.873374   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:23.281368   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:23.281589   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:23.372537   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:23.373478   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:23.782599   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:23.784929   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:23.872085   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:23.873033   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:24.280892   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:24.281550   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:24.373263   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:24.374216   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:24.781423   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:24.781676   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:24.873620   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:24.873757   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:24.873884   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:25.281410   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:25.281916   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:25.371552   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:25.372690   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:25.780483   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:25.781281   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:25.872146   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:25.873822   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:26.281211   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:26.281691   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:26.373530   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:26.373934   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:26.780711   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:26.781250   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:26.871715   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:26.872736   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:27.280698   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:27.281350   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:27.372131   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:27.372437   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:27.372802   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:27.780665   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:27.781610   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:27.872165   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:27.872573   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:28.280785   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:28.281539   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:28.372597   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:28.372832   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:28.779949   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:28.780910   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:28.871306   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:28.872455   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:29.281502   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:29.281586   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:29.372910   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:29.373315   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:29.373400   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:29.781673   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:29.781878   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:29.881245   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:29.881557   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:30.280201   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:30.280928   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:30.371872   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:50:30.373152   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:30.781416   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:30.781421   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:30.872106   17853 kapi.go:107] duration metric: took 51.503395032s to wait for kubernetes.io/minikube-addons=registry ...
	I0819 10:50:30.872783   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:31.281232   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:31.282145   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:31.373618   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:31.781449   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:31.781745   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:31.871887   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:31.872499   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:32.280789   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:32.281845   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:32.372852   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:32.780637   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:32.781631   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:32.872926   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:33.280499   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:33.281365   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:33.373313   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:33.781671   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:33.781791   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:33.872604   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:33.872981   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:34.283049   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:34.283473   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:34.373134   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:34.780135   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:34.780837   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:34.880048   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:35.280531   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:35.281244   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:35.373452   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:35.781322   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:35.781495   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:35.873105   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:35.873606   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:36.341632   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:36.342476   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:36.373623   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:36.781418   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:36.781839   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:36.873176   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:37.282167   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:37.283438   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:37.382321   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:37.780708   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:37.781384   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:37.873473   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:37.874354   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:38.281792   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:38.283346   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:38.373933   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:38.781391   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:38.781487   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:38.873802   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:39.280370   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:39.282215   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:39.383564   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:39.781307   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:39.781319   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:39.873652   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:40.281153   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:40.281722   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:40.374002   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:40.375110   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:40.780414   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:40.781064   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:40.874743   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:41.279736   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:41.281908   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:41.373868   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:41.779726   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:41.781974   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:41.873428   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:42.281363   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:42.281678   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:42.372724   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:42.781180   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:42.782269   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:42.873698   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:42.874991   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:43.280556   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:43.281356   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:43.373927   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:43.781040   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:43.781728   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:43.881402   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:44.280677   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:44.281323   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:44.373581   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:44.781697   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:44.781962   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:44.873430   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:45.280877   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:45.281535   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:45.372307   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:45.372905   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:45.779925   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:45.782064   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:45.874946   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:46.279964   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:46.281934   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:46.373323   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:46.781967   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:46.782324   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:46.872990   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:47.281150   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:47.281420   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:47.377625   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:47.381368   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:47.810435   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:47.810946   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:47.912465   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:48.280762   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:48.281287   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:48.373039   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:48.782084   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:48.783310   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:48.874856   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:49.281387   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:49.283916   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:49.374186   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:49.780869   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:49.782101   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:49.873372   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:49.874090   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:50.359575   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:50.360012   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:50.378684   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:50.780394   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:50.781492   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:50.873275   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:51.281578   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:51.281925   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:51.372988   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:51.781385   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:51.781722   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:51.873206   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:52.281579   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:52.282074   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:52.373554   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:52.374357   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:52.780519   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:52.781350   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:52.872913   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:53.281258   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:53.282388   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:53.373945   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:53.781225   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:53.781234   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:53.872964   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:54.280390   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:54.281631   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:54.373146   17853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:50:54.781431   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:54.781522   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:54.873312   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:54.873678   17853 kapi.go:107] duration metric: took 1m15.504147744s to wait for app.kubernetes.io/name=ingress-nginx ...
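
The kapi.go:96/107 pairs above record a label-selector poll: the pod list matching the selector is fetched about twice a second until every pod is Ready, and kapi.go:107 then prints the duration metric. The following is a minimal sketch of such a loop, assuming client-go; waitForPodsByLabel and allReady are illustrative names, not minikube's actual helpers.

package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForPodsByLabel polls until every pod matching selector reports
// Ready, or the timeout elapses (illustrative name, not minikube's API).
func waitForPodsByLabel(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	for time.Since(start) < timeout {
		pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 && allReady(pods.Items) {
			fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
			return nil
		}
		time.Sleep(500 * time.Millisecond) // roughly the ~0.5s cadence visible in the log above
	}
	return fmt.Errorf("timed out waiting for pods matching %q in %q", selector, ns)
}

// allReady reports whether every pod is Running with a true Ready condition.
func allReady(pods []corev1.Pod) bool {
	for _, p := range pods {
		if p.Status.Phase != corev1.PodRunning {
			return false
		}
		ready := false
		for _, cond := range p.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false
		}
	}
	return true
}
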
	I0819 10:50:55.281724   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:55.282664   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:55.781305   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:55.881223   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:56.280958   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:56.281816   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:56.781153   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:56.781740   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:57.280617   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:57.281847   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:57.372827   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:57.780842   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:57.781411   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:58.281525   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:58.281964   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:58.779566   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:58.781723   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:59.280765   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:59.281116   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:50:59.372859   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:50:59.781404   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:50:59.781547   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:51:00.281375   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:51:00.281437   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:51:00.781591   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:51:00.781673   17853 kapi.go:107] duration metric: took 1m19.002901269s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0819 10:51:00.783043   17853 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-454931 cluster.
	I0819 10:51:00.784390   17853 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0819 10:51:00.785681   17853 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
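
The opt-out hint printed above works through a pod label. Below is a minimal illustration of a pod carrying that label; the key gcp-auth-skip-secret is taken verbatim from the message, while the "true" value, the pod name, and the image are assumptions made only for the sketch.

package gcpauthskip

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// skipSecretPod builds a pod that the gcp-auth webhook should leave
// alone. The label key is verbatim from the message above; the "true"
// value is an assumption for this sketch.
func skipSecretPod() *corev1.Pod {
	return &corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-creds",
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
		},
	}
}
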
	I0819 10:51:01.280068   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:51:01.373399   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:01.780642   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:51:02.280930   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:51:02.781388   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:51:03.281216   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:51:03.373450   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:03.780237   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:51:04.280205   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:51:04.781151   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:51:05.280746   17853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:51:05.780567   17853 kapi.go:107] duration metric: took 1m25.004358745s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0819 10:51:05.782407   17853 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner, nvidia-device-plugin, default-storageclass, metrics-server, helm-tiller, yakd, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0819 10:51:05.783567   17853 addons.go:510] duration metric: took 1m33.044978222s for enable addons: enabled=[cloud-spanner ingress-dns storage-provisioner nvidia-device-plugin default-storageclass metrics-server helm-tiller yakd inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0819 10:51:05.872824   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:08.372865   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:10.373155   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:12.872481   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:14.873599   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:16.874667   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:19.372331   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:21.372565   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:23.372958   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:25.373010   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:27.871982   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:29.873323   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:32.372886   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:34.873426   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:36.875398   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:39.373815   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:41.872834   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:44.372596   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:46.372686   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:48.872097   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:50.872444   17853 pod_ready.go:103] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"False"
	I0819 10:51:52.872795   17853 pod_ready.go:93] pod "metrics-server-8988944d9-w697b" in "kube-system" namespace has status "Ready":"True"
	I0819 10:51:52.872818   17853 pod_ready.go:82] duration metric: took 1m58.50549999s for pod "metrics-server-8988944d9-w697b" in "kube-system" namespace to be "Ready" ...
	I0819 10:51:52.872830   17853 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-4xgtg" in "kube-system" namespace to be "Ready" ...
	I0819 10:51:52.877173   17853 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-4xgtg" in "kube-system" namespace has status "Ready":"True"
	I0819 10:51:52.877196   17853 pod_ready.go:82] duration metric: took 4.360181ms for pod "nvidia-device-plugin-daemonset-4xgtg" in "kube-system" namespace to be "Ready" ...
	I0819 10:51:52.877214   17853 pod_ready.go:39] duration metric: took 2m0.904868643s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
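
The pod_ready.go lines above come from per-pod Ready-condition checks: pod_ready.go:103 logs a still-false status on each poll, and pod_ready.go:93 logs the flip to true. A sketch of one such check, assuming client-go; podIsReady is an illustrative name.

package podwait

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podIsReady fetches one pod and tests its Ready condition, mirroring
// the checks behind the pod_ready log lines (illustrative helper).
func podIsReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
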
	I0819 10:51:52.877230   17853 api_server.go:52] waiting for apiserver process to appear ...
	I0819 10:51:52.877257   17853 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 10:51:52.877314   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 10:51:52.914911   17853 cri.go:89] found id: "8e27c625be2e5c47c4c554fe2aba32321eba34cf34ee581ac879194dcee62b58"
	I0819 10:51:52.914938   17853 cri.go:89] found id: ""
	I0819 10:51:52.914948   17853 logs.go:276] 1 containers: [8e27c625be2e5c47c4c554fe2aba32321eba34cf34ee581ac879194dcee62b58]
	I0819 10:51:52.915004   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:52.918448   17853 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 10:51:52.918513   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 10:51:52.951922   17853 cri.go:89] found id: "5aa227674dce361724174026c8a0ea1cf2334d688e7db0f087b365e61b4dc933"
	I0819 10:51:52.951945   17853 cri.go:89] found id: ""
	I0819 10:51:52.951959   17853 logs.go:276] 1 containers: [5aa227674dce361724174026c8a0ea1cf2334d688e7db0f087b365e61b4dc933]
	I0819 10:51:52.952017   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:52.955280   17853 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 10:51:52.955339   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 10:51:52.989756   17853 cri.go:89] found id: "f6b69457461e9f416f747747d0f782733c7404dcdfac764ad764e7064665a63b"
	I0819 10:51:52.989779   17853 cri.go:89] found id: "efa219bf4f0691a53d5267f2849bea5346e24dd972e9ec60342f16521fe772cb"
	I0819 10:51:52.989783   17853 cri.go:89] found id: ""
	I0819 10:51:52.989790   17853 logs.go:276] 2 containers: [f6b69457461e9f416f747747d0f782733c7404dcdfac764ad764e7064665a63b efa219bf4f0691a53d5267f2849bea5346e24dd972e9ec60342f16521fe772cb]
	I0819 10:51:52.989845   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:52.993093   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:52.996208   17853 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 10:51:52.996278   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 10:51:53.031761   17853 cri.go:89] found id: "7d39664256a4d3ba4557123ef31052dad647643e97a23a78ed323a868076a590"
	I0819 10:51:53.031790   17853 cri.go:89] found id: ""
	I0819 10:51:53.031799   17853 logs.go:276] 1 containers: [7d39664256a4d3ba4557123ef31052dad647643e97a23a78ed323a868076a590]
	I0819 10:51:53.031845   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:53.035108   17853 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 10:51:53.035189   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 10:51:53.068589   17853 cri.go:89] found id: "548017acd8f1a56c38fd283ae52b35444913a48cb008849ea7beedf32999f2c5"
	I0819 10:51:53.068616   17853 cri.go:89] found id: ""
	I0819 10:51:53.068625   17853 logs.go:276] 1 containers: [548017acd8f1a56c38fd283ae52b35444913a48cb008849ea7beedf32999f2c5]
	I0819 10:51:53.068691   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:53.071994   17853 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 10:51:53.072065   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 10:51:53.105767   17853 cri.go:89] found id: "cc5123d3ccb34df5aeeed4f851f5aee34f31fd171451f9c676e54152d87b288f"
	I0819 10:51:53.105792   17853 cri.go:89] found id: ""
	I0819 10:51:53.105801   17853 logs.go:276] 1 containers: [cc5123d3ccb34df5aeeed4f851f5aee34f31fd171451f9c676e54152d87b288f]
	I0819 10:51:53.105862   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:53.109103   17853 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 10:51:53.109168   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 10:51:53.143013   17853 cri.go:89] found id: "a291ab855f115c38d50a242f470844271cc15d6b4a6415a2256a82bc4761595a"
	I0819 10:51:53.143038   17853 cri.go:89] found id: ""
	I0819 10:51:53.143047   17853 logs.go:276] 1 containers: [a291ab855f115c38d50a242f470844271cc15d6b4a6415a2256a82bc4761595a]
	I0819 10:51:53.143106   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:53.146616   17853 logs.go:123] Gathering logs for etcd [5aa227674dce361724174026c8a0ea1cf2334d688e7db0f087b365e61b4dc933] ...
	I0819 10:51:53.146642   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5aa227674dce361724174026c8a0ea1cf2334d688e7db0f087b365e61b4dc933"
	I0819 10:51:53.190231   17853 logs.go:123] Gathering logs for kube-proxy [548017acd8f1a56c38fd283ae52b35444913a48cb008849ea7beedf32999f2c5] ...
	I0819 10:51:53.190268   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 548017acd8f1a56c38fd283ae52b35444913a48cb008849ea7beedf32999f2c5"
	I0819 10:51:53.223493   17853 logs.go:123] Gathering logs for kube-controller-manager [cc5123d3ccb34df5aeeed4f851f5aee34f31fd171451f9c676e54152d87b288f] ...
	I0819 10:51:53.223521   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc5123d3ccb34df5aeeed4f851f5aee34f31fd171451f9c676e54152d87b288f"
	I0819 10:51:53.281859   17853 logs.go:123] Gathering logs for CRI-O ...
	I0819 10:51:53.281895   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 10:51:53.355149   17853 logs.go:123] Gathering logs for container status ...
	I0819 10:51:53.355186   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 10:51:53.396722   17853 logs.go:123] Gathering logs for coredns [efa219bf4f0691a53d5267f2849bea5346e24dd972e9ec60342f16521fe772cb] ...
	I0819 10:51:53.396750   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efa219bf4f0691a53d5267f2849bea5346e24dd972e9ec60342f16521fe772cb"
	I0819 10:51:53.433556   17853 logs.go:123] Gathering logs for kube-scheduler [7d39664256a4d3ba4557123ef31052dad647643e97a23a78ed323a868076a590] ...
	I0819 10:51:53.433623   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d39664256a4d3ba4557123ef31052dad647643e97a23a78ed323a868076a590"
	I0819 10:51:53.472654   17853 logs.go:123] Gathering logs for kindnet [a291ab855f115c38d50a242f470844271cc15d6b4a6415a2256a82bc4761595a] ...
	I0819 10:51:53.472687   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a291ab855f115c38d50a242f470844271cc15d6b4a6415a2256a82bc4761595a"
	I0819 10:51:53.512392   17853 logs.go:123] Gathering logs for kubelet ...
	I0819 10:51:53.512425   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 10:51:53.568456   17853 logs.go:123] Gathering logs for dmesg ...
	I0819 10:51:53.568491   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 10:51:53.581351   17853 logs.go:123] Gathering logs for describe nodes ...
	I0819 10:51:53.581382   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 10:51:53.678182   17853 logs.go:123] Gathering logs for kube-apiserver [8e27c625be2e5c47c4c554fe2aba32321eba34cf34ee581ac879194dcee62b58] ...
	I0819 10:51:53.678212   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e27c625be2e5c47c4c554fe2aba32321eba34cf34ee581ac879194dcee62b58"
	I0819 10:51:53.723799   17853 logs.go:123] Gathering logs for coredns [f6b69457461e9f416f747747d0f782733c7404dcdfac764ad764e7064665a63b] ...
	I0819 10:51:53.723834   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6b69457461e9f416f747747d0f782733c7404dcdfac764ad764e7064665a63b"
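
Each "Gathering logs for ..." pair above runs crictl on the node against a container ID discovered by the earlier `crictl ps -a --quiet --name=...` calls. A minimal local equivalent (hypothetical sketch; the SSH plumbing that ssh_runner provides is omitted, and the ID is the etcd container from this run):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // etcd container ID taken from the log output above.
        id := "5aa227674dce361724174026c8a0ea1cf2334d688e7db0f087b365e61b4dc933"
        // Tail the last 400 lines, mirroring the Run: lines above.
        out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
        if err != nil {
            fmt.Println("crictl failed:", err)
        }
        fmt.Print(string(out))
    }
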
	I0819 10:51:56.259755   17853 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:51:56.273205   17853 api_server.go:72] duration metric: took 2m23.5346621s to wait for apiserver process to appear ...
	I0819 10:51:56.273228   17853 api_server.go:88] waiting for apiserver healthz status ...
	I0819 10:51:56.273263   17853 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 10:51:56.273314   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 10:51:56.306898   17853 cri.go:89] found id: "8e27c625be2e5c47c4c554fe2aba32321eba34cf34ee581ac879194dcee62b58"
	I0819 10:51:56.306919   17853 cri.go:89] found id: ""
	I0819 10:51:56.306927   17853 logs.go:276] 1 containers: [8e27c625be2e5c47c4c554fe2aba32321eba34cf34ee581ac879194dcee62b58]
	I0819 10:51:56.306986   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:56.310296   17853 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 10:51:56.310350   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 10:51:56.343684   17853 cri.go:89] found id: "5aa227674dce361724174026c8a0ea1cf2334d688e7db0f087b365e61b4dc933"
	I0819 10:51:56.343711   17853 cri.go:89] found id: ""
	I0819 10:51:56.343719   17853 logs.go:276] 1 containers: [5aa227674dce361724174026c8a0ea1cf2334d688e7db0f087b365e61b4dc933]
	I0819 10:51:56.343760   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:56.347064   17853 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 10:51:56.347120   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 10:51:56.380312   17853 cri.go:89] found id: "f6b69457461e9f416f747747d0f782733c7404dcdfac764ad764e7064665a63b"
	I0819 10:51:56.380337   17853 cri.go:89] found id: "efa219bf4f0691a53d5267f2849bea5346e24dd972e9ec60342f16521fe772cb"
	I0819 10:51:56.380342   17853 cri.go:89] found id: ""
	I0819 10:51:56.380349   17853 logs.go:276] 2 containers: [f6b69457461e9f416f747747d0f782733c7404dcdfac764ad764e7064665a63b efa219bf4f0691a53d5267f2849bea5346e24dd972e9ec60342f16521fe772cb]
	I0819 10:51:56.380392   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:56.383690   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:56.386752   17853 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 10:51:56.386816   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 10:51:56.418933   17853 cri.go:89] found id: "7d39664256a4d3ba4557123ef31052dad647643e97a23a78ed323a868076a590"
	I0819 10:51:56.418956   17853 cri.go:89] found id: ""
	I0819 10:51:56.418964   17853 logs.go:276] 1 containers: [7d39664256a4d3ba4557123ef31052dad647643e97a23a78ed323a868076a590]
	I0819 10:51:56.419008   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:56.422291   17853 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 10:51:56.422360   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 10:51:56.455813   17853 cri.go:89] found id: "548017acd8f1a56c38fd283ae52b35444913a48cb008849ea7beedf32999f2c5"
	I0819 10:51:56.455837   17853 cri.go:89] found id: ""
	I0819 10:51:56.455845   17853 logs.go:276] 1 containers: [548017acd8f1a56c38fd283ae52b35444913a48cb008849ea7beedf32999f2c5]
	I0819 10:51:56.455885   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:56.459251   17853 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 10:51:56.459324   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 10:51:56.492997   17853 cri.go:89] found id: "cc5123d3ccb34df5aeeed4f851f5aee34f31fd171451f9c676e54152d87b288f"
	I0819 10:51:56.493020   17853 cri.go:89] found id: ""
	I0819 10:51:56.493028   17853 logs.go:276] 1 containers: [cc5123d3ccb34df5aeeed4f851f5aee34f31fd171451f9c676e54152d87b288f]
	I0819 10:51:56.493076   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:56.496459   17853 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 10:51:56.496516   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 10:51:56.529763   17853 cri.go:89] found id: "a291ab855f115c38d50a242f470844271cc15d6b4a6415a2256a82bc4761595a"
	I0819 10:51:56.529786   17853 cri.go:89] found id: ""
	I0819 10:51:56.529797   17853 logs.go:276] 1 containers: [a291ab855f115c38d50a242f470844271cc15d6b4a6415a2256a82bc4761595a]
	I0819 10:51:56.529849   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:56.533164   17853 logs.go:123] Gathering logs for describe nodes ...
	I0819 10:51:56.533190   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 10:51:56.634293   17853 logs.go:123] Gathering logs for coredns [f6b69457461e9f416f747747d0f782733c7404dcdfac764ad764e7064665a63b] ...
	I0819 10:51:56.634330   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6b69457461e9f416f747747d0f782733c7404dcdfac764ad764e7064665a63b"
	I0819 10:51:56.669150   17853 logs.go:123] Gathering logs for kube-scheduler [7d39664256a4d3ba4557123ef31052dad647643e97a23a78ed323a868076a590] ...
	I0819 10:51:56.669182   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d39664256a4d3ba4557123ef31052dad647643e97a23a78ed323a868076a590"
	I0819 10:51:56.710888   17853 logs.go:123] Gathering logs for kube-proxy [548017acd8f1a56c38fd283ae52b35444913a48cb008849ea7beedf32999f2c5] ...
	I0819 10:51:56.710921   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 548017acd8f1a56c38fd283ae52b35444913a48cb008849ea7beedf32999f2c5"
	I0819 10:51:56.743711   17853 logs.go:123] Gathering logs for kube-controller-manager [cc5123d3ccb34df5aeeed4f851f5aee34f31fd171451f9c676e54152d87b288f] ...
	I0819 10:51:56.743736   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc5123d3ccb34df5aeeed4f851f5aee34f31fd171451f9c676e54152d87b288f"
	I0819 10:51:56.798887   17853 logs.go:123] Gathering logs for CRI-O ...
	I0819 10:51:56.798929   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 10:51:56.876928   17853 logs.go:123] Gathering logs for kubelet ...
	I0819 10:51:56.876968   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 10:51:56.930006   17853 logs.go:123] Gathering logs for dmesg ...
	I0819 10:51:56.930041   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 10:51:56.942264   17853 logs.go:123] Gathering logs for kube-apiserver [8e27c625be2e5c47c4c554fe2aba32321eba34cf34ee581ac879194dcee62b58] ...
	I0819 10:51:56.942293   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e27c625be2e5c47c4c554fe2aba32321eba34cf34ee581ac879194dcee62b58"
	I0819 10:51:56.987194   17853 logs.go:123] Gathering logs for etcd [5aa227674dce361724174026c8a0ea1cf2334d688e7db0f087b365e61b4dc933] ...
	I0819 10:51:56.987226   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5aa227674dce361724174026c8a0ea1cf2334d688e7db0f087b365e61b4dc933"
	I0819 10:51:57.030290   17853 logs.go:123] Gathering logs for coredns [efa219bf4f0691a53d5267f2849bea5346e24dd972e9ec60342f16521fe772cb] ...
	I0819 10:51:57.030319   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efa219bf4f0691a53d5267f2849bea5346e24dd972e9ec60342f16521fe772cb"
	I0819 10:51:57.066447   17853 logs.go:123] Gathering logs for kindnet [a291ab855f115c38d50a242f470844271cc15d6b4a6415a2256a82bc4761595a] ...
	I0819 10:51:57.066482   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a291ab855f115c38d50a242f470844271cc15d6b4a6415a2256a82bc4761595a"
	I0819 10:51:57.106049   17853 logs.go:123] Gathering logs for container status ...
	I0819 10:51:57.106084   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 10:51:59.648050   17853 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 10:51:59.651630   17853 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0819 10:51:59.652431   17853 api_server.go:141] control plane version: v1.31.0
	I0819 10:51:59.652452   17853 api_server.go:131] duration metric: took 3.379218933s to wait for apiserver health ...
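
The healthz wait above is a plain HTTPS GET that succeeds once the endpoint returns 200 with body "ok". A minimal sketch (hypothetical; it skips certificate verification because the apiserver's serving cert is issued for the cluster rather than a public CA, and it assumes /healthz is readable anonymously, as the default system:public-info-viewer binding allows):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            // Sketch only: a real client would trust the cluster CA instead.
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // the run above saw 200 and "ok"
    }
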
	I0819 10:51:59.652460   17853 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 10:51:59.652480   17853 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 10:51:59.652526   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 10:51:59.686278   17853 cri.go:89] found id: "8e27c625be2e5c47c4c554fe2aba32321eba34cf34ee581ac879194dcee62b58"
	I0819 10:51:59.686297   17853 cri.go:89] found id: ""
	I0819 10:51:59.686305   17853 logs.go:276] 1 containers: [8e27c625be2e5c47c4c554fe2aba32321eba34cf34ee581ac879194dcee62b58]
	I0819 10:51:59.686346   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:59.689372   17853 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 10:51:59.689425   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 10:51:59.723001   17853 cri.go:89] found id: "5aa227674dce361724174026c8a0ea1cf2334d688e7db0f087b365e61b4dc933"
	I0819 10:51:59.723022   17853 cri.go:89] found id: ""
	I0819 10:51:59.723031   17853 logs.go:276] 1 containers: [5aa227674dce361724174026c8a0ea1cf2334d688e7db0f087b365e61b4dc933]
	I0819 10:51:59.723090   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:59.726444   17853 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 10:51:59.726520   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 10:51:59.759668   17853 cri.go:89] found id: "f6b69457461e9f416f747747d0f782733c7404dcdfac764ad764e7064665a63b"
	I0819 10:51:59.759692   17853 cri.go:89] found id: "efa219bf4f0691a53d5267f2849bea5346e24dd972e9ec60342f16521fe772cb"
	I0819 10:51:59.759696   17853 cri.go:89] found id: ""
	I0819 10:51:59.759707   17853 logs.go:276] 2 containers: [f6b69457461e9f416f747747d0f782733c7404dcdfac764ad764e7064665a63b efa219bf4f0691a53d5267f2849bea5346e24dd972e9ec60342f16521fe772cb]
	I0819 10:51:59.759768   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:59.763459   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:59.767030   17853 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 10:51:59.767112   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 10:51:59.801139   17853 cri.go:89] found id: "7d39664256a4d3ba4557123ef31052dad647643e97a23a78ed323a868076a590"
	I0819 10:51:59.801160   17853 cri.go:89] found id: ""
	I0819 10:51:59.801168   17853 logs.go:276] 1 containers: [7d39664256a4d3ba4557123ef31052dad647643e97a23a78ed323a868076a590]
	I0819 10:51:59.801223   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:59.804661   17853 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 10:51:59.804727   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 10:51:59.837183   17853 cri.go:89] found id: "548017acd8f1a56c38fd283ae52b35444913a48cb008849ea7beedf32999f2c5"
	I0819 10:51:59.837202   17853 cri.go:89] found id: ""
	I0819 10:51:59.837208   17853 logs.go:276] 1 containers: [548017acd8f1a56c38fd283ae52b35444913a48cb008849ea7beedf32999f2c5]
	I0819 10:51:59.837251   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:59.840821   17853 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 10:51:59.840876   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 10:51:59.875289   17853 cri.go:89] found id: "cc5123d3ccb34df5aeeed4f851f5aee34f31fd171451f9c676e54152d87b288f"
	I0819 10:51:59.875315   17853 cri.go:89] found id: ""
	I0819 10:51:59.875322   17853 logs.go:276] 1 containers: [cc5123d3ccb34df5aeeed4f851f5aee34f31fd171451f9c676e54152d87b288f]
	I0819 10:51:59.875365   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:59.878793   17853 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 10:51:59.878862   17853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 10:51:59.911885   17853 cri.go:89] found id: "a291ab855f115c38d50a242f470844271cc15d6b4a6415a2256a82bc4761595a"
	I0819 10:51:59.911909   17853 cri.go:89] found id: ""
	I0819 10:51:59.911919   17853 logs.go:276] 1 containers: [a291ab855f115c38d50a242f470844271cc15d6b4a6415a2256a82bc4761595a]
	I0819 10:51:59.911960   17853 ssh_runner.go:195] Run: which crictl
	I0819 10:51:59.915146   17853 logs.go:123] Gathering logs for coredns [efa219bf4f0691a53d5267f2849bea5346e24dd972e9ec60342f16521fe772cb] ...
	I0819 10:51:59.915170   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efa219bf4f0691a53d5267f2849bea5346e24dd972e9ec60342f16521fe772cb"
	I0819 10:51:59.951104   17853 logs.go:123] Gathering logs for kube-scheduler [7d39664256a4d3ba4557123ef31052dad647643e97a23a78ed323a868076a590] ...
	I0819 10:51:59.951132   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d39664256a4d3ba4557123ef31052dad647643e97a23a78ed323a868076a590"
	I0819 10:51:59.989911   17853 logs.go:123] Gathering logs for kubelet ...
	I0819 10:51:59.989948   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 10:52:00.045238   17853 logs.go:123] Gathering logs for dmesg ...
	I0819 10:52:00.045289   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 10:52:00.058175   17853 logs.go:123] Gathering logs for describe nodes ...
	I0819 10:52:00.058204   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 10:52:00.158288   17853 logs.go:123] Gathering logs for kube-apiserver [8e27c625be2e5c47c4c554fe2aba32321eba34cf34ee581ac879194dcee62b58] ...
	I0819 10:52:00.158317   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e27c625be2e5c47c4c554fe2aba32321eba34cf34ee581ac879194dcee62b58"
	I0819 10:52:00.204448   17853 logs.go:123] Gathering logs for etcd [5aa227674dce361724174026c8a0ea1cf2334d688e7db0f087b365e61b4dc933] ...
	I0819 10:52:00.204495   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5aa227674dce361724174026c8a0ea1cf2334d688e7db0f087b365e61b4dc933"
	I0819 10:52:00.247754   17853 logs.go:123] Gathering logs for coredns [f6b69457461e9f416f747747d0f782733c7404dcdfac764ad764e7064665a63b] ...
	I0819 10:52:00.247799   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6b69457461e9f416f747747d0f782733c7404dcdfac764ad764e7064665a63b"
	I0819 10:52:00.284966   17853 logs.go:123] Gathering logs for CRI-O ...
	I0819 10:52:00.284998   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 10:52:00.363031   17853 logs.go:123] Gathering logs for kube-proxy [548017acd8f1a56c38fd283ae52b35444913a48cb008849ea7beedf32999f2c5] ...
	I0819 10:52:00.363070   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 548017acd8f1a56c38fd283ae52b35444913a48cb008849ea7beedf32999f2c5"
	I0819 10:52:00.396795   17853 logs.go:123] Gathering logs for kube-controller-manager [cc5123d3ccb34df5aeeed4f851f5aee34f31fd171451f9c676e54152d87b288f] ...
	I0819 10:52:00.396818   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc5123d3ccb34df5aeeed4f851f5aee34f31fd171451f9c676e54152d87b288f"
	I0819 10:52:00.456396   17853 logs.go:123] Gathering logs for kindnet [a291ab855f115c38d50a242f470844271cc15d6b4a6415a2256a82bc4761595a] ...
	I0819 10:52:00.456434   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a291ab855f115c38d50a242f470844271cc15d6b4a6415a2256a82bc4761595a"
	I0819 10:52:00.498202   17853 logs.go:123] Gathering logs for container status ...
	I0819 10:52:00.498233   17853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 10:52:03.051974   17853 system_pods.go:59] 20 kube-system pods found
	I0819 10:52:03.052005   17853 system_pods.go:61] "coredns-6f6b679f8f-4lg4p" [68bbaa37-27c9-491f-9299-a9fbb8e3c6aa] Running
	I0819 10:52:03.052010   17853 system_pods.go:61] "coredns-6f6b679f8f-hrnrm" [6622e471-6bfe-4b7f-8472-c5fbc9a7a6aa] Running
	I0819 10:52:03.052014   17853 system_pods.go:61] "csi-hostpath-attacher-0" [55bd1bec-37db-4934-bf73-0fd7d404a31a] Running
	I0819 10:52:03.052018   17853 system_pods.go:61] "csi-hostpath-resizer-0" [bcacc22c-91ac-438f-9425-d9dee1d7f8e4] Running
	I0819 10:52:03.052021   17853 system_pods.go:61] "csi-hostpathplugin-dfmfz" [d62f85fe-9bf5-4f41-9f85-3657f60b6e20] Running
	I0819 10:52:03.052024   17853 system_pods.go:61] "etcd-addons-454931" [5df4cd50-b241-4d2d-8393-b1f5b8fdafc7] Running
	I0819 10:52:03.052027   17853 system_pods.go:61] "kindnet-82zcc" [60e4e9fc-e115-4f32-8217-740dd919dc7d] Running
	I0819 10:52:03.052030   17853 system_pods.go:61] "kube-apiserver-addons-454931" [22bdb559-bd55-4bb9-b545-0d6eec0f6230] Running
	I0819 10:52:03.052033   17853 system_pods.go:61] "kube-controller-manager-addons-454931" [61aa2aac-e0c0-47f7-9915-afca23cdb2da] Running
	I0819 10:52:03.052036   17853 system_pods.go:61] "kube-ingress-dns-minikube" [8c0f4e82-c7eb-4302-bbfc-b9a95ab55947] Running
	I0819 10:52:03.052039   17853 system_pods.go:61] "kube-proxy-8dmbm" [21b8778a-872e-41ff-89cb-1d6ef217e957] Running
	I0819 10:52:03.052042   17853 system_pods.go:61] "kube-scheduler-addons-454931" [f9f38926-033a-4916-8383-9ae977b6b3d0] Running
	I0819 10:52:03.052045   17853 system_pods.go:61] "metrics-server-8988944d9-w697b" [7c3b07c1-62d8-4b80-b68f-5f7a56a385a4] Running
	I0819 10:52:03.052049   17853 system_pods.go:61] "nvidia-device-plugin-daemonset-4xgtg" [9f3c31d4-b4dd-4fc8-b9c4-1ca0c24775c8] Running
	I0819 10:52:03.052053   17853 system_pods.go:61] "registry-6fb4cdfc84-v7654" [d56000ae-59d9-4ff4-afc3-c173d1aa817f] Running
	I0819 10:52:03.052056   17853 system_pods.go:61] "registry-proxy-sjwlk" [497530f4-1b24-4840-a1d3-6d7174146af0] Running
	I0819 10:52:03.052059   17853 system_pods.go:61] "snapshot-controller-56fcc65765-84zqr" [4cfe5ad2-0a88-4a39-9d55-f4d66d60ea3a] Running
	I0819 10:52:03.052063   17853 system_pods.go:61] "snapshot-controller-56fcc65765-jjwss" [99541df2-d840-480a-8652-8e38b7a53574] Running
	I0819 10:52:03.052066   17853 system_pods.go:61] "storage-provisioner" [b4d4a5ac-4c79-414c-a9e3-960d790962a5] Running
	I0819 10:52:03.052070   17853 system_pods.go:61] "tiller-deploy-b48cc5f79-cdqdx" [e734e815-6d31-40f3-98f0-cc7c3f38ba44] Running
	I0819 10:52:03.052076   17853 system_pods.go:74] duration metric: took 3.399611618s to wait for pod list to return data ...
	I0819 10:52:03.052088   17853 default_sa.go:34] waiting for default service account to be created ...
	I0819 10:52:03.054114   17853 default_sa.go:45] found service account: "default"
	I0819 10:52:03.054135   17853 default_sa.go:55] duration metric: took 2.041965ms for default service account to be created ...
	I0819 10:52:03.054142   17853 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 10:52:03.062236   17853 system_pods.go:86] 20 kube-system pods found
	I0819 10:52:03.062267   17853 system_pods.go:89] "coredns-6f6b679f8f-4lg4p" [68bbaa37-27c9-491f-9299-a9fbb8e3c6aa] Running
	I0819 10:52:03.062273   17853 system_pods.go:89] "coredns-6f6b679f8f-hrnrm" [6622e471-6bfe-4b7f-8472-c5fbc9a7a6aa] Running
	I0819 10:52:03.062278   17853 system_pods.go:89] "csi-hostpath-attacher-0" [55bd1bec-37db-4934-bf73-0fd7d404a31a] Running
	I0819 10:52:03.062283   17853 system_pods.go:89] "csi-hostpath-resizer-0" [bcacc22c-91ac-438f-9425-d9dee1d7f8e4] Running
	I0819 10:52:03.062287   17853 system_pods.go:89] "csi-hostpathplugin-dfmfz" [d62f85fe-9bf5-4f41-9f85-3657f60b6e20] Running
	I0819 10:52:03.062290   17853 system_pods.go:89] "etcd-addons-454931" [5df4cd50-b241-4d2d-8393-b1f5b8fdafc7] Running
	I0819 10:52:03.062293   17853 system_pods.go:89] "kindnet-82zcc" [60e4e9fc-e115-4f32-8217-740dd919dc7d] Running
	I0819 10:52:03.062297   17853 system_pods.go:89] "kube-apiserver-addons-454931" [22bdb559-bd55-4bb9-b545-0d6eec0f6230] Running
	I0819 10:52:03.062301   17853 system_pods.go:89] "kube-controller-manager-addons-454931" [61aa2aac-e0c0-47f7-9915-afca23cdb2da] Running
	I0819 10:52:03.062312   17853 system_pods.go:89] "kube-ingress-dns-minikube" [8c0f4e82-c7eb-4302-bbfc-b9a95ab55947] Running
	I0819 10:52:03.062315   17853 system_pods.go:89] "kube-proxy-8dmbm" [21b8778a-872e-41ff-89cb-1d6ef217e957] Running
	I0819 10:52:03.062320   17853 system_pods.go:89] "kube-scheduler-addons-454931" [f9f38926-033a-4916-8383-9ae977b6b3d0] Running
	I0819 10:52:03.062326   17853 system_pods.go:89] "metrics-server-8988944d9-w697b" [7c3b07c1-62d8-4b80-b68f-5f7a56a385a4] Running
	I0819 10:52:03.062331   17853 system_pods.go:89] "nvidia-device-plugin-daemonset-4xgtg" [9f3c31d4-b4dd-4fc8-b9c4-1ca0c24775c8] Running
	I0819 10:52:03.062335   17853 system_pods.go:89] "registry-6fb4cdfc84-v7654" [d56000ae-59d9-4ff4-afc3-c173d1aa817f] Running
	I0819 10:52:03.062339   17853 system_pods.go:89] "registry-proxy-sjwlk" [497530f4-1b24-4840-a1d3-6d7174146af0] Running
	I0819 10:52:03.062342   17853 system_pods.go:89] "snapshot-controller-56fcc65765-84zqr" [4cfe5ad2-0a88-4a39-9d55-f4d66d60ea3a] Running
	I0819 10:52:03.062355   17853 system_pods.go:89] "snapshot-controller-56fcc65765-jjwss" [99541df2-d840-480a-8652-8e38b7a53574] Running
	I0819 10:52:03.062358   17853 system_pods.go:89] "storage-provisioner" [b4d4a5ac-4c79-414c-a9e3-960d790962a5] Running
	I0819 10:52:03.062361   17853 system_pods.go:89] "tiller-deploy-b48cc5f79-cdqdx" [e734e815-6d31-40f3-98f0-cc7c3f38ba44] Running
	I0819 10:52:03.062368   17853 system_pods.go:126] duration metric: took 8.22126ms to wait for k8s-apps to be running ...
	I0819 10:52:03.062377   17853 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 10:52:03.062422   17853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:52:03.073756   17853 system_svc.go:56] duration metric: took 11.371549ms WaitForService to wait for kubelet
	I0819 10:52:03.073784   17853 kubeadm.go:582] duration metric: took 2m30.33524262s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 10:52:03.073811   17853 node_conditions.go:102] verifying NodePressure condition ...
	I0819 10:52:03.076709   17853 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0819 10:52:03.076736   17853 node_conditions.go:123] node cpu capacity is 8
	I0819 10:52:03.076753   17853 node_conditions.go:105] duration metric: took 2.936409ms to run NodePressure ...
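
The NodePressure check above amounts to reading the node object, printing its capacity, and confirming no pressure condition is True. A minimal client-go sketch of the same verification (hypothetical; the node name and the expected values in the comments come from this run):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-454931", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("cpu:", node.Status.Capacity.Cpu().String())                            // 8 in this run
        fmt.Println("ephemeral-storage:", node.Status.Capacity.StorageEphemeral().String()) // 304681132Ki
        // Any condition other than Ready that is True indicates pressure
        // (MemoryPressure, DiskPressure, PIDPressure).
        for _, c := range node.Status.Conditions {
            if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
                fmt.Println("pressure condition set:", c.Type)
            }
        }
    }
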
	I0819 10:52:03.076763   17853 start.go:241] waiting for startup goroutines ...
	I0819 10:52:03.076773   17853 start.go:246] waiting for cluster config update ...
	I0819 10:52:03.076796   17853 start.go:255] writing updated cluster config ...
	I0819 10:52:03.077085   17853 ssh_runner.go:195] Run: rm -f paused
	I0819 10:52:03.127849   17853 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 10:52:03.130499   17853 out.go:177] * Done! kubectl is now configured to use "addons-454931" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 19 10:55:18 addons-454931 crio[1027]: time="2024-08-19 10:55:18.112526643Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-bc57996ff-5w8fz from CNI network \"kindnet\" (type=ptp)"
	Aug 19 10:55:18 addons-454931 crio[1027]: time="2024-08-19 10:55:18.159537603Z" level=info msg="Stopped pod sandbox: b159d32f52a6b60abaa246515c854ac76ee6e6c684ead4d1957647cc0b86f6bc" id=1c79db10-1b39-4a58-a463-38946875d7f1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 10:55:18 addons-454931 crio[1027]: time="2024-08-19 10:55:18.391125740Z" level=info msg="Removing container: 77a4e190d16b65dd21cc587bc7159e309f4abb74bd76ebb5a6ac0a6b39675066" id=951cde09-7d7c-4307-91f1-fef6b318c963 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 10:55:18 addons-454931 crio[1027]: time="2024-08-19 10:55:18.406577801Z" level=info msg="Removed container 77a4e190d16b65dd21cc587bc7159e309f4abb74bd76ebb5a6ac0a6b39675066: ingress-nginx/ingress-nginx-controller-bc57996ff-5w8fz/controller" id=951cde09-7d7c-4307-91f1-fef6b318c963 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 10:55:28 addons-454931 crio[1027]: time="2024-08-19 10:55:28.465897752Z" level=info msg="Removing container: 4cc4bc5a717897615e3bb01159cf97ddbcae8cb32f272516feb45c847c4080d7" id=f2bc8d27-893c-404c-afe6-c027e8a7b11b name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 10:55:28 addons-454931 crio[1027]: time="2024-08-19 10:55:28.479336915Z" level=info msg="Removed container 4cc4bc5a717897615e3bb01159cf97ddbcae8cb32f272516feb45c847c4080d7: ingress-nginx/ingress-nginx-admission-patch-hz5tk/patch" id=f2bc8d27-893c-404c-afe6-c027e8a7b11b name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 10:55:28 addons-454931 crio[1027]: time="2024-08-19 10:55:28.480612789Z" level=info msg="Removing container: a5bf1bddd60ffd8355bd8fe16d2faed67af832e963aef4a5387e82b5d8c2f1c1" id=3844e92a-87e5-4c98-80b0-eea11f655250 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 10:55:28 addons-454931 crio[1027]: time="2024-08-19 10:55:28.493699409Z" level=info msg="Removed container a5bf1bddd60ffd8355bd8fe16d2faed67af832e963aef4a5387e82b5d8c2f1c1: ingress-nginx/ingress-nginx-admission-create-gjp2j/create" id=3844e92a-87e5-4c98-80b0-eea11f655250 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 10:55:28 addons-454931 crio[1027]: time="2024-08-19 10:55:28.495049431Z" level=info msg="Stopping pod sandbox: 6de7785a3d610ef4cab0a8cd14162aa4c19b1d56cf593c856156c07a5b61e771" id=ec6e1956-1153-4bb0-a2da-378c073b0399 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 10:55:28 addons-454931 crio[1027]: time="2024-08-19 10:55:28.495108195Z" level=info msg="Stopped pod sandbox (already stopped): 6de7785a3d610ef4cab0a8cd14162aa4c19b1d56cf593c856156c07a5b61e771" id=ec6e1956-1153-4bb0-a2da-378c073b0399 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 10:55:28 addons-454931 crio[1027]: time="2024-08-19 10:55:28.495381416Z" level=info msg="Removing pod sandbox: 6de7785a3d610ef4cab0a8cd14162aa4c19b1d56cf593c856156c07a5b61e771" id=a0bd05b8-3bbf-4fbb-906f-630e98ff1824 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 10:55:28 addons-454931 crio[1027]: time="2024-08-19 10:55:28.501817509Z" level=info msg="Removed pod sandbox: 6de7785a3d610ef4cab0a8cd14162aa4c19b1d56cf593c856156c07a5b61e771" id=a0bd05b8-3bbf-4fbb-906f-630e98ff1824 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 10:55:28 addons-454931 crio[1027]: time="2024-08-19 10:55:28.502284680Z" level=info msg="Stopping pod sandbox: 3a2c1a5b29fcc302c285d2e7a7c041e17586d162b3228956cb2893a054b5c786" id=f3f75d1d-ebb4-47d4-ad0a-404a0a2e0c85 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 10:55:28 addons-454931 crio[1027]: time="2024-08-19 10:55:28.502313020Z" level=info msg="Stopped pod sandbox (already stopped): 3a2c1a5b29fcc302c285d2e7a7c041e17586d162b3228956cb2893a054b5c786" id=f3f75d1d-ebb4-47d4-ad0a-404a0a2e0c85 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 10:55:28 addons-454931 crio[1027]: time="2024-08-19 10:55:28.502617345Z" level=info msg="Removing pod sandbox: 3a2c1a5b29fcc302c285d2e7a7c041e17586d162b3228956cb2893a054b5c786" id=affb8228-414d-4d28-9901-1b55aef5a288 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 10:55:28 addons-454931 crio[1027]: time="2024-08-19 10:55:28.508288505Z" level=info msg="Removed pod sandbox: 3a2c1a5b29fcc302c285d2e7a7c041e17586d162b3228956cb2893a054b5c786" id=affb8228-414d-4d28-9901-1b55aef5a288 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 10:55:28 addons-454931 crio[1027]: time="2024-08-19 10:55:28.508700715Z" level=info msg="Stopping pod sandbox: b159d32f52a6b60abaa246515c854ac76ee6e6c684ead4d1957647cc0b86f6bc" id=198a1ee4-287b-47bd-9c44-2750530ec44c name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 10:55:28 addons-454931 crio[1027]: time="2024-08-19 10:55:28.508741697Z" level=info msg="Stopped pod sandbox (already stopped): b159d32f52a6b60abaa246515c854ac76ee6e6c684ead4d1957647cc0b86f6bc" id=198a1ee4-287b-47bd-9c44-2750530ec44c name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 10:55:28 addons-454931 crio[1027]: time="2024-08-19 10:55:28.509060842Z" level=info msg="Removing pod sandbox: b159d32f52a6b60abaa246515c854ac76ee6e6c684ead4d1957647cc0b86f6bc" id=4590240e-795c-4834-bed0-65ef23bac1c2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 10:55:28 addons-454931 crio[1027]: time="2024-08-19 10:55:28.514924586Z" level=info msg="Removed pod sandbox: b159d32f52a6b60abaa246515c854ac76ee6e6c684ead4d1957647cc0b86f6bc" id=4590240e-795c-4834-bed0-65ef23bac1c2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 10:55:28 addons-454931 crio[1027]: time="2024-08-19 10:55:28.515418382Z" level=info msg="Stopping pod sandbox: eac484383a5a36562a6156cafd84cfe2ff332adb3abb55a774081a0837d74c36" id=4e7a8bd7-ec97-4ba5-8281-43e63ca05f21 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 10:55:28 addons-454931 crio[1027]: time="2024-08-19 10:55:28.515476857Z" level=info msg="Stopped pod sandbox (already stopped): eac484383a5a36562a6156cafd84cfe2ff332adb3abb55a774081a0837d74c36" id=4e7a8bd7-ec97-4ba5-8281-43e63ca05f21 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 10:55:28 addons-454931 crio[1027]: time="2024-08-19 10:55:28.515754545Z" level=info msg="Removing pod sandbox: eac484383a5a36562a6156cafd84cfe2ff332adb3abb55a774081a0837d74c36" id=a04605a8-89c8-40c9-bf39-78538c3d1ac6 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 10:55:28 addons-454931 crio[1027]: time="2024-08-19 10:55:28.521297136Z" level=info msg="Removed pod sandbox: eac484383a5a36562a6156cafd84cfe2ff332adb3abb55a774081a0837d74c36" id=a04605a8-89c8-40c9-bf39-78538c3d1ac6 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 10:58:24 addons-454931 crio[1027]: time="2024-08-19 10:58:24.298348984Z" level=info msg="Stopping container: 7be7c5c1959e61dc87b58cd7d3eb7eed2e6821ef596b3978b6e21fbdb71b1e26 (timeout: 30s)" id=bd6bde8d-5760-4777-9b0c-6ca7493159c4 name=/runtime.v1.RuntimeService/StopContainer
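
The name=/runtime.v1.RuntimeService/... suffixes above are the CRI RPCs that CRI-O served while the ingress addon was torn down. The same StopPodSandbox/RemovePodSandbox calls can be driven by hand with crictl's pod-sandbox subcommands; a hypothetical sketch, reusing a sandbox ID from this log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // run executes a crictl invocation under sudo and echoes its output.
    func run(args ...string) {
        out, err := exec.Command("sudo", args...).CombinedOutput()
        fmt.Printf("$ sudo %s\n%s", strings.Join(args, " "), out)
        if err != nil {
            fmt.Println("error:", err)
        }
    }

    func main() {
        // ingress-nginx controller sandbox ID taken from the CRI-O log above.
        id := "b159d32f52a6b60abaa246515c854ac76ee6e6c684ead4d1957647cc0b86f6bc"
        run("crictl", "stopp", id) // RuntimeService/StopPodSandbox
        run("crictl", "rmp", id)   // RuntimeService/RemovePodSandbox
    }
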
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cc6c18b26eee1       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   18e8f424ac99c       hello-world-app-55bf9c44b4-9zzxq
	fabd3d00ff447       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                         5 minutes ago       Running             nginx                     0                   cbd68ca0cc955       nginx
	03a15738c9960       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   d543f674aeef7       busybox
	59c51a6ccc5b7       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        7 minutes ago       Running             local-path-provisioner    0                   13a46ef95d37b       local-path-provisioner-86d989889c-hvnxs
	7be7c5c1959e6       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Running             metrics-server            0                   e67682a0f458b       metrics-server-8988944d9-w697b
	f6b69457461e9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        8 minutes ago       Running             coredns                   0                   4795ae57d4813       coredns-6f6b679f8f-4lg4p
	d18cf641bcb89       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        8 minutes ago       Running             storage-provisioner       0                   9a2fb5fa91757       storage-provisioner
	efa219bf4f069       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        8 minutes ago       Running             coredns                   0                   88ae1673af4e1       coredns-6f6b679f8f-hrnrm
	a291ab855f115       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b                      8 minutes ago       Running             kindnet-cni               0                   d478e18ee0139       kindnet-82zcc
	548017acd8f1a       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                        8 minutes ago       Running             kube-proxy                0                   f41c989262885       kube-proxy-8dmbm
	7d39664256a4d       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                        9 minutes ago       Running             kube-scheduler            0                   ce4617c6e7341       kube-scheduler-addons-454931
	8e27c625be2e5       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                        9 minutes ago       Running             kube-apiserver            0                   7146a60e9c386       kube-apiserver-addons-454931
	cc5123d3ccb34       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                        9 minutes ago       Running             kube-controller-manager   0                   e7a495b2a54ad       kube-controller-manager-addons-454931
	5aa227674dce3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        9 minutes ago       Running             etcd                      0                   7acfce8976602       etcd-addons-454931
	
	
	==> coredns [efa219bf4f0691a53d5267f2849bea5346e24dd972e9ec60342f16521fe772cb] <==
	[INFO] 10.244.0.7:41958 - 33101 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000077428s
	[INFO] 10.244.0.7:54471 - 52001 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000089237s
	[INFO] 10.244.0.7:54471 - 42796 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0001511s
	[INFO] 10.244.0.7:52201 - 59800 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00006429s
	[INFO] 10.244.0.7:52201 - 20380 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000097268s
	[INFO] 10.244.0.7:54121 - 58433 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.003850892s
	[INFO] 10.244.0.7:54121 - 59725 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.005563286s
	[INFO] 10.244.0.7:52094 - 25866 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003271509s
	[INFO] 10.244.0.7:52094 - 29198 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003863361s
	[INFO] 10.244.0.7:56623 - 22681 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000081281s
	[INFO] 10.244.0.7:56623 - 52634 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000064844s
	[INFO] 10.244.0.7:60412 - 17149 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000442s
	[INFO] 10.244.0.7:60412 - 13025 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000053967s
	[INFO] 10.244.0.7:42848 - 16461 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.003665777s
	[INFO] 10.244.0.7:42848 - 49742 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.003688822s
	[INFO] 10.244.0.7:40232 - 18365 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004089831s
	[INFO] 10.244.0.7:40232 - 50617 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004200287s
	[INFO] 10.244.0.7:57769 - 32646 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.002882573s
	[INFO] 10.244.0.7:57769 - 11403 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003776917s
	[INFO] 10.244.0.22:54533 - 64803 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000189906s
	[INFO] 10.244.0.22:39407 - 14247 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00014852s
	[INFO] 10.244.0.22:33803 - 18565 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.007448023s
	[INFO] 10.244.0.22:36783 - 15237 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.00619233s
	[INFO] 10.244.0.22:52466 - 4793 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006281089s
	[INFO] 10.244.0.22:55602 - 1488 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000815424s
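
The NXDOMAIN bursts in both coredns logs are ordinary search-path expansion, not lookup failures: with ndots:5, a name such as registry.kube-system.svc.cluster.local (four dots) is first tried against every search domain, and only the final absolute query returns NOERROR. A plausible /etc/resolv.conf for a kube-system pod on this node, reconstructed from the suffixes visible in the queries above (the nameserver value is assumed; it does not appear in this log, and the *.internal entries are inherited from the GCE host):

    nameserver 10.96.0.10
    search kube-system.svc.cluster.local svc.cluster.local cluster.local europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
    options ndots:5
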
	
	
	==> coredns [f6b69457461e9f416f747747d0f782733c7404dcdfac764ad764e7064665a63b] <==
	[INFO] 10.244.0.7:51052 - 63765 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000126615s
	[INFO] 10.244.0.7:42284 - 24012 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.003903686s
	[INFO] 10.244.0.7:42284 - 13775 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.005355048s
	[INFO] 10.244.0.7:60296 - 37705 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003218804s
	[INFO] 10.244.0.7:60296 - 32589 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004250666s
	[INFO] 10.244.0.7:46083 - 61566 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000110167s
	[INFO] 10.244.0.7:46083 - 28536 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000139858s
	[INFO] 10.244.0.7:39968 - 43585 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004197253s
	[INFO] 10.244.0.7:39968 - 4164 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004436842s
	[INFO] 10.244.0.7:37474 - 54656 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000083632s
	[INFO] 10.244.0.7:37474 - 25229 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000105475s
	[INFO] 10.244.0.7:44404 - 31906 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000157019s
	[INFO] 10.244.0.7:44404 - 25775 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00018275s
	[INFO] 10.244.0.7:46216 - 25519 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000065258s
	[INFO] 10.244.0.7:46216 - 57011 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000106283s
	[INFO] 10.244.0.22:60011 - 27675 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000184004s
	[INFO] 10.244.0.22:58506 - 17004 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00011187s
	[INFO] 10.244.0.22:44021 - 3426 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000122507s
	[INFO] 10.244.0.22:39021 - 15645 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000140122s
	[INFO] 10.244.0.22:46275 - 60929 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.007999212s
	[INFO] 10.244.0.22:41570 - 62774 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004459121s
	[INFO] 10.244.0.22:34100 - 60625 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005328444s
	[INFO] 10.244.0.22:57989 - 60766 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000946131s
	[INFO] 10.244.0.27:49374 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000191128s
	[INFO] 10.244.0.27:52591 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000134612s
	
	
	==> describe nodes <==
	Name:               addons-454931
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-454931
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934
	                    minikube.k8s.io/name=addons-454931
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T10_49_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-454931
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 10:49:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-454931
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 10:58:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 10:55:35 +0000   Mon, 19 Aug 2024 10:49:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 10:55:35 +0000   Mon, 19 Aug 2024 10:49:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 10:55:35 +0000   Mon, 19 Aug 2024 10:49:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 10:55:35 +0000   Mon, 19 Aug 2024 10:49:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-454931
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 960f55d6b6854585920b92aaf22992e8
	  System UUID:                1e7e9fae-fade-4d33-903a-36d9e09706d1
	  Boot ID:                    7f72e4de-82e3-4ac1-af0c-a667ff710ce9
	  Kernel Version:             5.15.0-1066-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  default                     hello-world-app-55bf9c44b4-9zzxq           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 coredns-6f6b679f8f-4lg4p                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m51s
	  kube-system                 coredns-6f6b679f8f-hrnrm                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m52s
	  kube-system                 etcd-addons-454931                         100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m57s
	  kube-system                 kindnet-82zcc                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m53s
	  kube-system                 kube-apiserver-addons-454931               250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m57s
	  kube-system                 kube-controller-manager-addons-454931      200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m57s
	  kube-system                 kube-proxy-8dmbm                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m53s
	  kube-system                 kube-scheduler-addons-454931               100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m57s
	  kube-system                 metrics-server-8988944d9-w697b             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         8m48s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m48s
	  local-path-storage          local-path-provisioner-86d989889c-hvnxs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             490Mi (1%)   390Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 8m47s                kube-proxy       
	  Normal   NodeHasSufficientMemory  9m3s (x8 over 9m3s)  kubelet          Node addons-454931 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m3s (x8 over 9m3s)  kubelet          Node addons-454931 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m3s (x7 over 9m3s)  kubelet          Node addons-454931 status is now: NodeHasSufficientPID
	  Normal   Starting                 8m57s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m57s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  8m57s                kubelet          Node addons-454931 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m57s                kubelet          Node addons-454931 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m57s                kubelet          Node addons-454931 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m53s                node-controller  Node addons-454931 event: Registered Node addons-454931 in Controller
	  Normal   NodeReady                8m34s                kubelet          Node addons-454931 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001354] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.001371] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.001459] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001282] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.572178] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.045427] systemd[1]: /lib/systemd/system/cloud-init-local.service:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.006439] systemd[1]: /lib/systemd/system/cloud-init.service:19: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.014067] systemd[1]: /lib/systemd/system/cloud-final.service:9: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.002552] systemd[1]: /lib/systemd/system/cloud-config.service:8: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.013877] systemd[1]: /lib/systemd/system/cloud-init.target:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +6.453746] kauditd_printk_skb: 46 callbacks suppressed
	[Aug19 10:53] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ba eb 3d db 32 39 92 24 91 09 8a a1 08 00
	[  +1.007780] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ba eb 3d db 32 39 92 24 91 09 8a a1 08 00
	[  +2.011814] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ba eb 3d db 32 39 92 24 91 09 8a a1 08 00
	[  +4.063599] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ba eb 3d db 32 39 92 24 91 09 8a a1 08 00
	[  +8.191202] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ba eb 3d db 32 39 92 24 91 09 8a a1 08 00
	[ +16.126423] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ba eb 3d db 32 39 92 24 91 09 8a a1 08 00
	[Aug19 10:54] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ba eb 3d db 32 39 92 24 91 09 8a a1 08 00
	
	
	==> etcd [5aa227674dce361724174026c8a0ea1cf2334d688e7db0f087b365e61b4dc933] <==
	{"level":"info","ts":"2024-08-19T10:49:34.373000Z","caller":"traceutil/trace.go:171","msg":"trace[403933077] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"100.302724ms","start":"2024-08-19T10:49:34.272681Z","end":"2024-08-19T10:49:34.372983Z","steps":["trace[403933077] 'process raft request'  (duration: 100.198334ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T10:49:34.554228Z","caller":"traceutil/trace.go:171","msg":"trace[839491864] transaction","detail":"{read_only:false; response_revision:382; number_of_response:1; }","duration":"188.931632ms","start":"2024-08-19T10:49:34.365273Z","end":"2024-08-19T10:49:34.554205Z","steps":["trace[839491864] 'process raft request'  (duration: 92.863023ms)","trace[839491864] 'compare'  (duration: 95.679414ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T10:49:34.857777Z","caller":"traceutil/trace.go:171","msg":"trace[488263906] transaction","detail":"{read_only:false; response_revision:386; number_of_response:1; }","duration":"183.060142ms","start":"2024-08-19T10:49:34.673348Z","end":"2024-08-19T10:49:34.856408Z","steps":["trace[488263906] 'process raft request'  (duration: 182.828447ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T10:49:34.960620Z","caller":"traceutil/trace.go:171","msg":"trace[1361221828] linearizableReadLoop","detail":"{readStateIndex:401; appliedIndex:396; }","duration":"184.126501ms","start":"2024-08-19T10:49:34.776477Z","end":"2024-08-19T10:49:34.960603Z","steps":["trace[1361221828] 'read index received'  (duration: 79.657019ms)","trace[1361221828] 'applied index is now lower than readState.Index'  (duration: 104.468828ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T10:49:34.960763Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.265901ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T10:49:34.960796Z","caller":"traceutil/trace.go:171","msg":"trace[378239969] range","detail":"{range_begin:/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset; range_end:; response_count:0; response_revision:389; }","duration":"184.312374ms","start":"2024-08-19T10:49:34.776473Z","end":"2024-08-19T10:49:34.960786Z","steps":["trace[378239969] 'agreement among raft nodes before linearized reading'  (duration: 184.207526ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T10:49:34.961044Z","caller":"traceutil/trace.go:171","msg":"trace[760802916] transaction","detail":"{read_only:false; response_revision:387; number_of_response:1; }","duration":"287.173248ms","start":"2024-08-19T10:49:34.673845Z","end":"2024-08-19T10:49:34.961019Z","steps":["trace[760802916] 'process raft request'  (duration: 195.844543ms)","trace[760802916] 'get key's previous created_revision and leaseID' {req_type:put; key:/registry/deployments/kube-system/coredns; req_size:4016; } (duration: 90.366506ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T10:49:34.961221Z","caller":"traceutil/trace.go:171","msg":"trace[1706789155] transaction","detail":"{read_only:false; response_revision:388; number_of_response:1; }","duration":"285.707791ms","start":"2024-08-19T10:49:34.675490Z","end":"2024-08-19T10:49:34.961213Z","steps":["trace[1706789155] 'process raft request'  (duration: 284.917608ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T10:49:34.961320Z","caller":"traceutil/trace.go:171","msg":"trace[1516592545] transaction","detail":"{read_only:false; number_of_response:1; response_revision:388; }","duration":"200.300701ms","start":"2024-08-19T10:49:34.761013Z","end":"2024-08-19T10:49:34.961314Z","steps":["trace[1516592545] 'process raft request'  (duration: 199.471609ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T10:49:34.961407Z","caller":"traceutil/trace.go:171","msg":"trace[780529673] transaction","detail":"{read_only:false; response_revision:389; number_of_response:1; }","duration":"185.142231ms","start":"2024-08-19T10:49:34.776253Z","end":"2024-08-19T10:49:34.961396Z","steps":["trace[780529673] 'process raft request'  (duration: 184.281163ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T10:49:35.754190Z","caller":"traceutil/trace.go:171","msg":"trace[451347578] transaction","detail":"{read_only:false; response_revision:401; number_of_response:1; }","duration":"189.192755ms","start":"2024-08-19T10:49:35.564977Z","end":"2024-08-19T10:49:35.754170Z","steps":["trace[451347578] 'process raft request'  (duration: 97.341338ms)","trace[451347578] 'compare'  (duration: 91.585712ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T10:49:35.754514Z","caller":"traceutil/trace.go:171","msg":"trace[2127898163] linearizableReadLoop","detail":"{readStateIndex:413; appliedIndex:412; }","duration":"184.70333ms","start":"2024-08-19T10:49:35.569797Z","end":"2024-08-19T10:49:35.754500Z","steps":["trace[2127898163] 'read index received'  (duration: 92.531543ms)","trace[2127898163] 'applied index is now lower than readState.Index'  (duration: 92.170961ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T10:49:35.754606Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.791695ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T10:49:35.756684Z","caller":"traceutil/trace.go:171","msg":"trace[1479245646] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:406; }","duration":"186.87491ms","start":"2024-08-19T10:49:35.569792Z","end":"2024-08-19T10:49:35.756667Z","steps":["trace[1479245646] 'agreement among raft nodes before linearized reading'  (duration: 184.751451ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T10:49:35.756750Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.894706ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-addons-454931\" ","response":"range_response_count:1 size:7632"}
	{"level":"info","ts":"2024-08-19T10:49:35.754635Z","caller":"traceutil/trace.go:171","msg":"trace[1699164827] transaction","detail":"{read_only:false; response_revision:402; number_of_response:1; }","duration":"184.604571ms","start":"2024-08-19T10:49:35.570021Z","end":"2024-08-19T10:49:35.754626Z","steps":["trace[1699164827] 'process raft request'  (duration: 184.122714ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T10:49:35.754675Z","caller":"traceutil/trace.go:171","msg":"trace[1702706134] transaction","detail":"{read_only:false; response_revision:404; number_of_response:1; }","duration":"100.310775ms","start":"2024-08-19T10:49:35.654353Z","end":"2024-08-19T10:49:35.754664Z","steps":["trace[1702706134] 'process raft request'  (duration: 99.905567ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T10:49:35.754768Z","caller":"traceutil/trace.go:171","msg":"trace[675679215] transaction","detail":"{read_only:false; response_revision:403; number_of_response:1; }","duration":"100.505413ms","start":"2024-08-19T10:49:35.654254Z","end":"2024-08-19T10:49:35.754759Z","steps":["trace[675679215] 'process raft request'  (duration: 99.976845ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T10:49:35.757920Z","caller":"traceutil/trace.go:171","msg":"trace[40001997] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-addons-454931; range_end:; response_count:1; response_revision:406; }","duration":"188.06874ms","start":"2024-08-19T10:49:35.569835Z","end":"2024-08-19T10:49:35.757904Z","steps":["trace[40001997] 'agreement among raft nodes before linearized reading'  (duration: 186.855991ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T10:49:36.656939Z","caller":"traceutil/trace.go:171","msg":"trace[2109661097] transaction","detail":"{read_only:false; response_revision:450; number_of_response:1; }","duration":"186.960435ms","start":"2024-08-19T10:49:36.469964Z","end":"2024-08-19T10:49:36.656925Z","steps":["trace[2109661097] 'process raft request'  (duration: 186.922523ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T10:49:36.657528Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.860907ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/registry\" ","response":"range_response_count:1 size:3351"}
	{"level":"info","ts":"2024-08-19T10:49:36.657826Z","caller":"traceutil/trace.go:171","msg":"trace[1293447703] range","detail":"{range_begin:/registry/deployments/kube-system/registry; range_end:; response_count:1; response_revision:450; }","duration":"101.164818ms","start":"2024-08-19T10:49:36.556647Z","end":"2024-08-19T10:49:36.657812Z","steps":["trace[1293447703] 'agreement among raft nodes before linearized reading'  (duration: 100.826319ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T10:49:37.272942Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.01887ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/default/kubernetes\" ","response":"range_response_count:1 size:474"}
	{"level":"info","ts":"2024-08-19T10:49:37.273084Z","caller":"traceutil/trace.go:171","msg":"trace[304421460] range","detail":"{range_begin:/registry/endpointslices/default/kubernetes; range_end:; response_count:1; response_revision:507; }","duration":"103.165352ms","start":"2024-08-19T10:49:37.169903Z","end":"2024-08-19T10:49:37.273069Z","steps":["trace[304421460] 'agreement among raft nodes before linearized reading'  (duration: 102.991635ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T10:50:47.808016Z","caller":"traceutil/trace.go:171","msg":"trace[619413792] transaction","detail":"{read_only:false; response_revision:1161; number_of_response:1; }","duration":"110.038435ms","start":"2024-08-19T10:50:47.697956Z","end":"2024-08-19T10:50:47.807995Z","steps":["trace[619413792] 'process raft request'  (duration: 43.068904ms)","trace[619413792] 'compare'  (duration: 66.883694ms)"],"step_count":2}
	
	
	==> kernel <==
	 10:58:25 up 40 min,  0 users,  load average: 0.04, 0.23, 0.20
	Linux addons-454931 5.15.0-1066-gcp #74~20.04.1-Ubuntu SMP Fri Jul 26 09:28:41 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [a291ab855f115c38d50a242f470844271cc15d6b4a6415a2256a82bc4761595a] <==
	I0819 10:57:11.754691       1 main.go:299] handling current node
	W0819 10:57:19.270301       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 10:57:19.270335       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 10:57:21.754504       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 10:57:21.754547       1 main.go:299] handling current node
	W0819 10:57:28.654244       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 10:57:28.654282       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0819 10:57:31.708688       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 10:57:31.708721       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0819 10:57:31.754847       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 10:57:31.754887       1 main.go:299] handling current node
	I0819 10:57:41.755228       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 10:57:41.755278       1 main.go:299] handling current node
	I0819 10:57:51.755103       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 10:57:51.755142       1 main.go:299] handling current node
	W0819 10:58:01.280954       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 10:58:01.280985       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 10:58:01.754283       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 10:58:01.754324       1 main.go:299] handling current node
	I0819 10:58:11.755128       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 10:58:11.755165       1 main.go:299] handling current node
	W0819 10:58:18.025827       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 10:58:18.025869       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0819 10:58:21.754606       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 10:58:21.754648       1 main.go:299] handling current node
	
	
	==> kube-apiserver [8e27c625be2e5c47c4c554fe2aba32321eba34cf34ee581ac879194dcee62b58] <==
	I0819 10:51:52.616521       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0819 10:52:12.561045       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:48696: use of closed network connection
	E0819 10:52:12.778823       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:48716: use of closed network connection
	I0819 10:52:37.512890       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0819 10:52:38.047013       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E0819 10:52:47.189847       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.29:35624: read: connection reset by peer
	I0819 10:52:48.150453       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0819 10:52:49.168051       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0819 10:52:49.873716       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0819 10:52:50.036906       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.247.106"}
	I0819 10:52:54.800483       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.73.172"}
	I0819 10:53:10.430472       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 10:53:10.430525       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 10:53:10.443973       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 10:53:10.444117       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 10:53:10.445463       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 10:53:10.445500       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 10:53:10.454880       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 10:53:10.455020       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 10:53:10.465966       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 10:53:10.466081       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0819 10:53:11.445608       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0819 10:53:11.466925       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0819 10:53:11.568764       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0819 10:55:12.864345       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.87.86"}
	
	
	==> kube-controller-manager [cc5123d3ccb34df5aeeed4f851f5aee34f31fd171451f9c676e54152d87b288f] <==
	W0819 10:56:33.255335       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 10:56:33.255382       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 10:56:34.747901       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 10:56:34.747947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 10:56:50.407719       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 10:56:50.407762       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 10:57:09.303988       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 10:57:09.304039       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 10:57:09.617034       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 10:57:09.617078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 10:57:26.431645       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 10:57:26.431682       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 10:57:30.884937       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 10:57:30.884978       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 10:57:45.118725       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 10:57:45.118776       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 10:57:52.097550       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 10:57:52.097593       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 10:58:11.302127       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 10:58:11.302169       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 10:58:17.349475       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 10:58:17.349526       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 10:58:18.185116       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 10:58:18.185169       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0819 10:58:24.287231       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-8988944d9" duration="8.914µs"
	
	
	==> kube-proxy [548017acd8f1a56c38fd283ae52b35444913a48cb008849ea7beedf32999f2c5] <==
	I0819 10:49:36.266759       1 server_linux.go:66] "Using iptables proxy"
	I0819 10:49:37.175542       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0819 10:49:37.175719       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 10:49:37.558077       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0819 10:49:37.558203       1 server_linux.go:169] "Using iptables Proxier"
	I0819 10:49:37.568736       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 10:49:37.572784       1 server.go:483] "Version info" version="v1.31.0"
	I0819 10:49:37.573063       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 10:49:37.759310       1 config.go:197] "Starting service config controller"
	I0819 10:49:37.763009       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 10:49:37.762332       1 config.go:326] "Starting node config controller"
	I0819 10:49:37.763162       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 10:49:37.762364       1 config.go:104] "Starting endpoint slice config controller"
	I0819 10:49:37.763216       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 10:49:37.863577       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 10:49:37.863616       1 shared_informer.go:320] Caches are synced for service config
	I0819 10:49:37.863748       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7d39664256a4d3ba4557123ef31052dad647643e97a23a78ed323a868076a590] <==
	W0819 10:49:25.583348       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 10:49:25.583569       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 10:49:25.583359       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 10:49:25.583603       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 10:49:25.583420       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 10:49:25.583632       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 10:49:26.484862       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 10:49:26.484899       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 10:49:26.505185       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 10:49:26.505234       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 10:49:26.584095       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 10:49:26.584137       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 10:49:26.621657       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 10:49:26.621705       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 10:49:26.628971       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 10:49:26.629012       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 10:49:26.697928       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 10:49:26.697971       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 10:49:26.712491       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 10:49:26.712528       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 10:49:26.714528       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 10:49:26.714567       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 10:49:26.732943       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 10:49:26.732986       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0819 10:49:29.882301       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 10:57:28 addons-454931 kubelet[1625]: E0819 10:57:28.435111    1625 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724065048434900704,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616563,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 10:57:28 addons-454931 kubelet[1625]: E0819 10:57:28.435143    1625 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724065048434900704,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616563,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 10:57:38 addons-454931 kubelet[1625]: E0819 10:57:38.437540    1625 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724065058437252082,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616563,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 10:57:38 addons-454931 kubelet[1625]: E0819 10:57:38.437585    1625 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724065058437252082,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616563,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 10:57:48 addons-454931 kubelet[1625]: E0819 10:57:48.439970    1625 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724065068439682938,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616563,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 10:57:48 addons-454931 kubelet[1625]: E0819 10:57:48.440017    1625 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724065068439682938,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616563,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 10:57:52 addons-454931 kubelet[1625]: I0819 10:57:52.164825    1625 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-6f6b679f8f-hrnrm" secret="" err="secret \"gcp-auth\" not found"
	Aug 19 10:57:58 addons-454931 kubelet[1625]: E0819 10:57:58.443034    1625 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724065078442788747,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616563,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 10:57:58 addons-454931 kubelet[1625]: E0819 10:57:58.443078    1625 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724065078442788747,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616563,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 10:58:08 addons-454931 kubelet[1625]: E0819 10:58:08.445677    1625 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724065088445423741,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616563,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 10:58:08 addons-454931 kubelet[1625]: E0819 10:58:08.445719    1625 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724065088445423741,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616563,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 10:58:18 addons-454931 kubelet[1625]: E0819 10:58:18.448524    1625 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724065098448286694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616563,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 10:58:18 addons-454931 kubelet[1625]: E0819 10:58:18.448567    1625 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724065098448286694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616563,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 10:58:24 addons-454931 kubelet[1625]: I0819 10:58:24.164452    1625 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 19 10:58:24 addons-454931 kubelet[1625]: I0819 10:58:24.297465    1625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-9zzxq" podStartSLOduration=188.523734035 podStartE2EDuration="3m12.297440264s" podCreationTimestamp="2024-08-19 10:55:12 +0000 UTC" firstStartedPulling="2024-08-19 10:55:13.278106772 +0000 UTC m=+345.218450357" lastFinishedPulling="2024-08-19 10:55:17.051812989 +0000 UTC m=+348.992156586" observedRunningTime="2024-08-19 10:55:17.398790054 +0000 UTC m=+349.339133657" watchObservedRunningTime="2024-08-19 10:58:24.297440264 +0000 UTC m=+536.237783867"
	Aug 19 10:58:25 addons-454931 kubelet[1625]: I0819 10:58:25.703552    1625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88mkq\" (UniqueName: \"kubernetes.io/projected/7c3b07c1-62d8-4b80-b68f-5f7a56a385a4-kube-api-access-88mkq\") pod \"7c3b07c1-62d8-4b80-b68f-5f7a56a385a4\" (UID: \"7c3b07c1-62d8-4b80-b68f-5f7a56a385a4\") "
	Aug 19 10:58:25 addons-454931 kubelet[1625]: I0819 10:58:25.703645    1625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7c3b07c1-62d8-4b80-b68f-5f7a56a385a4-tmp-dir\") pod \"7c3b07c1-62d8-4b80-b68f-5f7a56a385a4\" (UID: \"7c3b07c1-62d8-4b80-b68f-5f7a56a385a4\") "
	Aug 19 10:58:25 addons-454931 kubelet[1625]: I0819 10:58:25.703935    1625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c3b07c1-62d8-4b80-b68f-5f7a56a385a4-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "7c3b07c1-62d8-4b80-b68f-5f7a56a385a4" (UID: "7c3b07c1-62d8-4b80-b68f-5f7a56a385a4"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 19 10:58:25 addons-454931 kubelet[1625]: I0819 10:58:25.705244    1625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c3b07c1-62d8-4b80-b68f-5f7a56a385a4-kube-api-access-88mkq" (OuterVolumeSpecName: "kube-api-access-88mkq") pod "7c3b07c1-62d8-4b80-b68f-5f7a56a385a4" (UID: "7c3b07c1-62d8-4b80-b68f-5f7a56a385a4"). InnerVolumeSpecName "kube-api-access-88mkq". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 19 10:58:25 addons-454931 kubelet[1625]: I0819 10:58:25.743205    1625 scope.go:117] "RemoveContainer" containerID="7be7c5c1959e61dc87b58cd7d3eb7eed2e6821ef596b3978b6e21fbdb71b1e26"
	Aug 19 10:58:25 addons-454931 kubelet[1625]: I0819 10:58:25.758967    1625 scope.go:117] "RemoveContainer" containerID="7be7c5c1959e61dc87b58cd7d3eb7eed2e6821ef596b3978b6e21fbdb71b1e26"
	Aug 19 10:58:25 addons-454931 kubelet[1625]: E0819 10:58:25.759332    1625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7be7c5c1959e61dc87b58cd7d3eb7eed2e6821ef596b3978b6e21fbdb71b1e26\": container with ID starting with 7be7c5c1959e61dc87b58cd7d3eb7eed2e6821ef596b3978b6e21fbdb71b1e26 not found: ID does not exist" containerID="7be7c5c1959e61dc87b58cd7d3eb7eed2e6821ef596b3978b6e21fbdb71b1e26"
	Aug 19 10:58:25 addons-454931 kubelet[1625]: I0819 10:58:25.759376    1625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7be7c5c1959e61dc87b58cd7d3eb7eed2e6821ef596b3978b6e21fbdb71b1e26"} err="failed to get container status \"7be7c5c1959e61dc87b58cd7d3eb7eed2e6821ef596b3978b6e21fbdb71b1e26\": rpc error: code = NotFound desc = could not find container \"7be7c5c1959e61dc87b58cd7d3eb7eed2e6821ef596b3978b6e21fbdb71b1e26\": container with ID starting with 7be7c5c1959e61dc87b58cd7d3eb7eed2e6821ef596b3978b6e21fbdb71b1e26 not found: ID does not exist"
	Aug 19 10:58:25 addons-454931 kubelet[1625]: I0819 10:58:25.804660    1625 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-88mkq\" (UniqueName: \"kubernetes.io/projected/7c3b07c1-62d8-4b80-b68f-5f7a56a385a4-kube-api-access-88mkq\") on node \"addons-454931\" DevicePath \"\""
	Aug 19 10:58:25 addons-454931 kubelet[1625]: I0819 10:58:25.804699    1625 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7c3b07c1-62d8-4b80-b68f-5f7a56a385a4-tmp-dir\") on node \"addons-454931\" DevicePath \"\""
	
	
	==> storage-provisioner [d18cf641bcb894f80055948d4b524f525fef195a0f0db22c91cca43266b781de] <==
	I0819 10:49:52.961924       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 10:49:52.971714       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 10:49:52.971776       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 10:49:52.983906       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 10:49:52.984062       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-454931_c463ac3e-4f1b-4dd5-8445-2155b982069f!
	I0819 10:49:52.984083       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"062087cb-c6cc-4539-9bb4-d3dfe225f675", APIVersion:"v1", ResourceVersion:"933", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-454931_c463ac3e-4f1b-4dd5-8445-2155b982069f became leader
	I0819 10:49:53.085004       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-454931_c463ac3e-4f1b-4dd5-8445-2155b982069f!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-454931 -n addons-454931
helpers_test.go:261: (dbg) Run:  kubectl --context addons-454931 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (349.02s)


Test pass (301/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 38.26
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 13.69
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.21
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 1.07
21 TestBinaryMirror 0.77
22 TestOffline 58.05
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 193.02
31 TestAddons/serial/GCPAuth/Namespaces 0.13
33 TestAddons/parallel/Registry 17.68
35 TestAddons/parallel/InspektorGadget 11.18
37 TestAddons/parallel/HelmTiller 10.81
39 TestAddons/parallel/CSI 49.61
40 TestAddons/parallel/Headlamp 18.61
41 TestAddons/parallel/CloudSpanner 5.47
42 TestAddons/parallel/LocalPath 15.15
43 TestAddons/parallel/NvidiaDevicePlugin 6.44
44 TestAddons/parallel/Yakd 10.74
45 TestAddons/StoppedEnableDisable 12.06
46 TestCertOptions 24.9
47 TestCertExpiration 220.98
49 TestForceSystemdFlag 31.66
50 TestForceSystemdEnv 36.21
52 TestKVMDriverInstallOrUpdate 4.5
56 TestErrorSpam/setup 23.38
57 TestErrorSpam/start 0.58
58 TestErrorSpam/status 0.86
59 TestErrorSpam/pause 1.5
60 TestErrorSpam/unpause 1.65
61 TestErrorSpam/stop 1.34
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 45.8
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 35.64
68 TestFunctional/serial/KubeContext 0.05
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.76
73 TestFunctional/serial/CacheCmd/cache/add_local 2
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.82
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 50.13
82 TestFunctional/serial/ComponentHealth 0.07
83 TestFunctional/serial/LogsCmd 1.34
84 TestFunctional/serial/LogsFileCmd 1.36
85 TestFunctional/serial/InvalidService 3.73
87 TestFunctional/parallel/ConfigCmd 0.36
88 TestFunctional/parallel/DashboardCmd 13.72
89 TestFunctional/parallel/DryRun 0.4
90 TestFunctional/parallel/InternationalLanguage 0.19
91 TestFunctional/parallel/StatusCmd 1.09
95 TestFunctional/parallel/ServiceCmdConnect 8.83
96 TestFunctional/parallel/AddonsCmd 0.14
97 TestFunctional/parallel/PersistentVolumeClaim 38.25
99 TestFunctional/parallel/SSHCmd 0.47
100 TestFunctional/parallel/CpCmd 2.02
101 TestFunctional/parallel/MySQL 27.65
102 TestFunctional/parallel/FileSync 0.26
103 TestFunctional/parallel/CertSync 2.1
107 TestFunctional/parallel/NodeLabels 0.09
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.74
111 TestFunctional/parallel/License 0.58
112 TestFunctional/parallel/ServiceCmd/DeployApp 11.2
113 TestFunctional/parallel/Version/short 0.05
114 TestFunctional/parallel/Version/components 0.74
115 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
117 TestFunctional/parallel/ImageCommands/ImageListJson 0.2
118 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
119 TestFunctional/parallel/ImageCommands/ImageBuild 3.06
120 TestFunctional/parallel/ImageCommands/Setup 1.77
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.41
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.85
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.91
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.49
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.74
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.52
131 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
132 TestFunctional/parallel/ProfileCmd/profile_list 0.34
133 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
134 TestFunctional/parallel/ServiceCmd/List 0.89
135 TestFunctional/parallel/ServiceCmd/JSONOutput 0.91
136 TestFunctional/parallel/ServiceCmd/HTTPS 0.45
137 TestFunctional/parallel/ServiceCmd/Format 0.35
138 TestFunctional/parallel/ServiceCmd/URL 0.33
140 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.38
141 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
143 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 20.2
144 TestFunctional/parallel/MountCmd/any-port 15.69
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
146 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
150 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
151 TestFunctional/parallel/MountCmd/specific-port 1.62
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.41
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 105.53
160 TestMultiControlPlane/serial/DeployApp 6.06
161 TestMultiControlPlane/serial/PingHostFromPods 1.02
162 TestMultiControlPlane/serial/AddWorkerNode 36.36
163 TestMultiControlPlane/serial/NodeLabels 0.07
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.65
165 TestMultiControlPlane/serial/CopyFile 15.71
166 TestMultiControlPlane/serial/StopSecondaryNode 12.55
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.48
168 TestMultiControlPlane/serial/RestartSecondaryNode 22.52
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 2.53
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 136.95
171 TestMultiControlPlane/serial/DeleteSecondaryNode 11.33
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.45
173 TestMultiControlPlane/serial/StopCluster 35.55
174 TestMultiControlPlane/serial/RestartCluster 58.11
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.47
176 TestMultiControlPlane/serial/AddSecondaryNode 51
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.62
181 TestJSONOutput/start/Command 45.07
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.68
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.59
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.68
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.21
206 TestKicCustomNetwork/create_custom_network 37.07
207 TestKicCustomNetwork/use_default_bridge_network 23.41
208 TestKicExistingNetwork 26.1
209 TestKicCustomSubnet 27.11
210 TestKicStaticIP 23.69
211 TestMainNoArgs 0.04
212 TestMinikubeProfile 51
215 TestMountStart/serial/StartWithMountFirst 8.59
216 TestMountStart/serial/VerifyMountFirst 0.24
217 TestMountStart/serial/StartWithMountSecond 8.92
218 TestMountStart/serial/VerifyMountSecond 0.23
219 TestMountStart/serial/DeleteFirst 1.6
220 TestMountStart/serial/VerifyMountPostDelete 0.24
221 TestMountStart/serial/Stop 1.17
222 TestMountStart/serial/RestartStopped 7.69
223 TestMountStart/serial/VerifyMountPostStop 0.24
226 TestMultiNode/serial/FreshStart2Nodes 73.47
227 TestMultiNode/serial/DeployApp2Nodes 4.54
228 TestMultiNode/serial/PingHostFrom2Pods 0.7
229 TestMultiNode/serial/AddNode 29.56
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.28
232 TestMultiNode/serial/CopyFile 8.85
233 TestMultiNode/serial/StopNode 2.07
234 TestMultiNode/serial/StartAfterStop 8.95
235 TestMultiNode/serial/RestartKeepsNodes 107.75
236 TestMultiNode/serial/DeleteNode 5.26
237 TestMultiNode/serial/StopMultiNode 23.79
238 TestMultiNode/serial/RestartMultiNode 45.59
239 TestMultiNode/serial/ValidateNameConflict 25.98
244 TestPreload 139.34
246 TestScheduledStopUnix 97.02
249 TestInsufficientStorage 10.06
250 TestRunningBinaryUpgrade 90.89
252 TestKubernetesUpgrade 340.01
253 TestMissingContainerUpgrade 154.4
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
256 TestStoppedBinaryUpgrade/Setup 2.35
257 TestNoKubernetes/serial/StartWithK8s 29.35
258 TestStoppedBinaryUpgrade/Upgrade 153.78
259 TestNoKubernetes/serial/StartWithStopK8s 8.89
260 TestNoKubernetes/serial/Start 8.18
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.25
262 TestNoKubernetes/serial/ProfileList 0.86
263 TestNoKubernetes/serial/Stop 1.18
264 TestNoKubernetes/serial/StartNoArgs 6.96
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
273 TestStoppedBinaryUpgrade/MinikubeLogs 2.19
275 TestPause/serial/Start 43.58
283 TestNetworkPlugins/group/false 3.1
288 TestStartStop/group/old-k8s-version/serial/FirstStart 112.01
289 TestPause/serial/SecondStartNoReconfiguration 34.01
290 TestPause/serial/Pause 0.72
291 TestPause/serial/VerifyStatus 0.3
292 TestPause/serial/Unpause 0.63
293 TestPause/serial/PauseAgain 0.84
294 TestPause/serial/DeletePaused 2.81
295 TestPause/serial/VerifyDeletedResources 0.72
297 TestStartStop/group/no-preload/serial/FirstStart 80.94
298 TestStartStop/group/old-k8s-version/serial/DeployApp 9.41
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.79
300 TestStartStop/group/old-k8s-version/serial/Stop 12.04
301 TestStartStop/group/no-preload/serial/DeployApp 10.27
302 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
303 TestStartStop/group/old-k8s-version/serial/SecondStart 121.65
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.93
305 TestStartStop/group/no-preload/serial/Stop 12.11
307 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 52.26
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
309 TestStartStop/group/no-preload/serial/SecondStart 286.12
311 TestStartStop/group/newest-cni/serial/FirstStart 28.64
312 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.25
313 TestStartStop/group/newest-cni/serial/DeployApp 0
314 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.83
315 TestStartStop/group/newest-cni/serial/Stop 2
316 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
317 TestStartStop/group/newest-cni/serial/SecondStart 12.81
318 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.84
319 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.89
320 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
321 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
322 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
323 TestStartStop/group/newest-cni/serial/Pause 2.55
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
325 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 302.37
327 TestStartStop/group/embed-certs/serial/FirstStart 43.59
328 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
329 TestStartStop/group/embed-certs/serial/DeployApp 10.25
330 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
331 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.21
332 TestStartStop/group/old-k8s-version/serial/Pause 2.63
333 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.07
334 TestStartStop/group/embed-certs/serial/Stop 11.93
335 TestNetworkPlugins/group/auto/Start 43.15
336 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
337 TestStartStop/group/embed-certs/serial/SecondStart 263.95
338 TestNetworkPlugins/group/auto/KubeletFlags 0.26
339 TestNetworkPlugins/group/auto/NetCatPod 10.19
340 TestNetworkPlugins/group/auto/DNS 0.13
341 TestNetworkPlugins/group/auto/Localhost 0.11
342 TestNetworkPlugins/group/auto/HairPin 0.11
343 TestNetworkPlugins/group/kindnet/Start 43.54
344 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
345 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
346 TestNetworkPlugins/group/kindnet/NetCatPod 10.17
347 TestNetworkPlugins/group/kindnet/DNS 0.13
348 TestNetworkPlugins/group/kindnet/Localhost 0.11
349 TestNetworkPlugins/group/kindnet/HairPin 0.11
350 TestNetworkPlugins/group/calico/Start 60.85
351 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
352 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
353 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
354 TestStartStop/group/no-preload/serial/Pause 2.72
355 TestNetworkPlugins/group/custom-flannel/Start 51.35
356 TestNetworkPlugins/group/calico/ControllerPod 6.01
357 TestNetworkPlugins/group/calico/KubeletFlags 0.27
358 TestNetworkPlugins/group/calico/NetCatPod 11.21
359 TestNetworkPlugins/group/calico/DNS 0.13
360 TestNetworkPlugins/group/calico/Localhost 0.1
361 TestNetworkPlugins/group/calico/HairPin 0.1
362 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
363 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.18
364 TestNetworkPlugins/group/custom-flannel/DNS 0.17
365 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
366 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
367 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
368 TestNetworkPlugins/group/enable-default-cni/Start 38.02
369 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
370 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
371 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.95
372 TestNetworkPlugins/group/flannel/Start 54.54
373 TestNetworkPlugins/group/bridge/Start 70.34
374 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
375 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
376 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
378 TestStartStop/group/embed-certs/serial/Pause 3.37
379 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.22
380 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
381 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
382 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
383 TestNetworkPlugins/group/flannel/ControllerPod 6.01
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
385 TestNetworkPlugins/group/flannel/NetCatPod 10.18
386 TestNetworkPlugins/group/flannel/DNS 0.13
387 TestNetworkPlugins/group/flannel/Localhost 0.1
388 TestNetworkPlugins/group/flannel/HairPin 0.11
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.25
390 TestNetworkPlugins/group/bridge/NetCatPod 11.17
391 TestNetworkPlugins/group/bridge/DNS 0.15
392 TestNetworkPlugins/group/bridge/Localhost 0.11
393 TestNetworkPlugins/group/bridge/HairPin 0.11

TestDownloadOnly/v1.20.0/json-events (38.26s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-626075 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-626075 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (38.257123317s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (38.26s)
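
The -o=json flag used above switches minikube's progress reporting to line-delimited JSON events on stdout (the same stream the TestJSONOutput cases later in this report assert on). A minimal sketch of replaying the invocation and pulling out the event types, assuming jq is installed; the profile name is reused from the log, but any fresh name works:

$ out/minikube-linux-amd64 start -o=json --download-only -p download-only-626075 \
    --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker \
  | jq -r '.type' | sort -u
# expect event types like io.k8s.sigs.minikube.step and io.k8s.sigs.minikube.download.progress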

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-626075
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-626075: exit status 85 (61.023488ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-626075 | jenkins | v1.33.1 | 19 Aug 24 10:47 UTC |          |
	|         | -p download-only-626075        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 10:47:55
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 10:47:55.282312   16424 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:47:55.282413   16424 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:47:55.282421   16424 out.go:358] Setting ErrFile to fd 2...
	I0819 10:47:55.282425   16424 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:47:55.282602   16424 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-9624/.minikube/bin
	W0819 10:47:55.282730   16424 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19476-9624/.minikube/config/config.json: open /home/jenkins/minikube-integration/19476-9624/.minikube/config/config.json: no such file or directory
	I0819 10:47:55.283284   16424 out.go:352] Setting JSON to true
	I0819 10:47:55.284192   16424 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":1815,"bootTime":1724062660,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 10:47:55.284254   16424 start.go:139] virtualization: kvm guest
	I0819 10:47:55.286623   16424 out.go:97] [download-only-626075] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0819 10:47:55.286756   16424 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19476-9624/.minikube/cache/preloaded-tarball: no such file or directory
	I0819 10:47:55.286759   16424 notify.go:220] Checking for updates...
	I0819 10:47:55.288263   16424 out.go:169] MINIKUBE_LOCATION=19476
	I0819 10:47:55.289604   16424 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 10:47:55.290728   16424 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19476-9624/kubeconfig
	I0819 10:47:55.291909   16424 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-9624/.minikube
	I0819 10:47:55.293106   16424 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0819 10:47:55.295265   16424 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 10:47:55.295472   16424 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 10:47:55.317838   16424 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 10:47:55.317996   16424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 10:47:55.665810   16424 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-19 10:47:55.656560257 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 10:47:55.665957   16424 docker.go:307] overlay module found
	I0819 10:47:55.667727   16424 out.go:97] Using the docker driver based on user configuration
	I0819 10:47:55.667756   16424 start.go:297] selected driver: docker
	I0819 10:47:55.667770   16424 start.go:901] validating driver "docker" against <nil>
	I0819 10:47:55.667875   16424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 10:47:55.715072   16424 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-19 10:47:55.705695548 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 10:47:55.715231   16424 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 10:47:55.715867   16424 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0819 10:47:55.716021   16424 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 10:47:55.717702   16424 out.go:169] Using Docker driver with root privileges
	I0819 10:47:55.718774   16424 cni.go:84] Creating CNI manager for ""
	I0819 10:47:55.718790   16424 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 10:47:55.718800   16424 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 10:47:55.718865   16424 start.go:340] cluster config:
	{Name:download-only-626075 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-626075 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:47:55.720055   16424 out.go:97] Starting "download-only-626075" primary control-plane node in "download-only-626075" cluster
	I0819 10:47:55.720073   16424 cache.go:121] Beginning downloading kic base image for docker with crio
	I0819 10:47:55.721189   16424 out.go:97] Pulling base image v0.0.44-1723740748-19452 ...
	I0819 10:47:55.721213   16424 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 10:47:55.721312   16424 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 10:47:55.737245   16424 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 10:47:55.737441   16424 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 10:47:55.737536   16424 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 10:47:55.853665   16424 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0819 10:47:55.853692   16424 cache.go:56] Caching tarball of preloaded images
	I0819 10:47:55.853838   16424 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 10:47:55.855611   16424 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0819 10:47:55.855632   16424 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0819 10:47:55.951995   16424 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19476-9624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0819 10:48:07.512741   16424 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0819 10:48:07.512827   16424 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19476-9624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0819 10:48:08.432429   16424 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0819 10:48:08.432740   16424 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/download-only-626075/config.json ...
	I0819 10:48:08.432766   16424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/download-only-626075/config.json: {Name:mk6bbd4c19b153d08e3cba656e6d64341b41d54e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:48:08.432920   16424 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 10:48:08.433087   16424 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19476-9624/.minikube/cache/linux/amd64/v1.20.0/kubectl
	I0819 10:48:16.021037   16424 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	
	
	* The control-plane node download-only-626075 host does not exist
	  To start a cluster, run: "minikube start -p download-only-626075"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
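
The non-zero exit from "minikube logs" is expected at this point: with --download-only no host is ever created (the stdout above ends with "The control-plane node download-only-626075 host does not exist"), so there is nothing to collect, and the test only bounds how long the logs call takes. What the run does leave behind is the cache populated by the download steps logged above; the paths below are taken verbatim from the log, and the listed contents are what those log lines imply:

$ ls /home/jenkins/minikube-integration/19476-9624/.minikube/cache/preloaded-tarball/
preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
$ ls /home/jenkins/minikube-integration/19476-9624/.minikube/cache/linux/amd64/v1.20.0/
kubectl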

TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-626075
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.0/json-events (13.69s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-867810 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-867810 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (13.688032423s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (13.69s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-867810
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-867810: exit status 85 (57.613921ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-626075 | jenkins | v1.33.1 | 19 Aug 24 10:47 UTC |                     |
	|         | -p download-only-626075        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 19 Aug 24 10:48 UTC | 19 Aug 24 10:48 UTC |
	| delete  | -p download-only-626075        | download-only-626075 | jenkins | v1.33.1 | 19 Aug 24 10:48 UTC | 19 Aug 24 10:48 UTC |
	| start   | -o=json --download-only        | download-only-867810 | jenkins | v1.33.1 | 19 Aug 24 10:48 UTC |                     |
	|         | -p download-only-867810        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 10:48:33
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 10:48:33.932759   16866 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:48:33.932898   16866 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:48:33.932908   16866 out.go:358] Setting ErrFile to fd 2...
	I0819 10:48:33.932914   16866 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:48:33.933088   16866 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-9624/.minikube/bin
	I0819 10:48:33.933772   16866 out.go:352] Setting JSON to true
	I0819 10:48:33.934615   16866 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":1854,"bootTime":1724062660,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 10:48:33.934672   16866 start.go:139] virtualization: kvm guest
	I0819 10:48:33.936702   16866 out.go:97] [download-only-867810] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 10:48:33.936890   16866 notify.go:220] Checking for updates...
	I0819 10:48:33.938279   16866 out.go:169] MINIKUBE_LOCATION=19476
	I0819 10:48:33.939373   16866 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 10:48:33.940708   16866 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19476-9624/kubeconfig
	I0819 10:48:33.941877   16866 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-9624/.minikube
	I0819 10:48:33.943021   16866 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0819 10:48:33.945620   16866 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 10:48:33.945848   16866 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 10:48:33.968289   16866 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 10:48:33.968383   16866 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 10:48:34.013512   16866 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-19 10:48:34.004532002 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 10:48:34.013609   16866 docker.go:307] overlay module found
	I0819 10:48:34.015099   16866 out.go:97] Using the docker driver based on user configuration
	I0819 10:48:34.015123   16866 start.go:297] selected driver: docker
	I0819 10:48:34.015134   16866 start.go:901] validating driver "docker" against <nil>
	I0819 10:48:34.015224   16866 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 10:48:34.063754   16866 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-19 10:48:34.054654157 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 10:48:34.063958   16866 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 10:48:34.064442   16866 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0819 10:48:34.064599   16866 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 10:48:34.066320   16866 out.go:169] Using Docker driver with root privileges
	I0819 10:48:34.067416   16866 cni.go:84] Creating CNI manager for ""
	I0819 10:48:34.067435   16866 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 10:48:34.067487   16866 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 10:48:34.067596   16866 start.go:340] cluster config:
	{Name:download-only-867810 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-867810 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:48:34.068926   16866 out.go:97] Starting "download-only-867810" primary control-plane node in "download-only-867810" cluster
	I0819 10:48:34.068945   16866 cache.go:121] Beginning downloading kic base image for docker with crio
	I0819 10:48:34.069974   16866 out.go:97] Pulling base image v0.0.44-1723740748-19452 ...
	I0819 10:48:34.069997   16866 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 10:48:34.070127   16866 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 10:48:34.086166   16866 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 10:48:34.086335   16866 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 10:48:34.086354   16866 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0819 10:48:34.086359   16866 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0819 10:48:34.086368   16866 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0819 10:48:34.493095   16866 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 10:48:34.493130   16866 cache.go:56] Caching tarball of preloaded images
	I0819 10:48:34.493267   16866 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 10:48:34.495035   16866 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0819 10:48:34.495059   16866 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	I0819 10:48:34.593841   16866 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:4a2ae163f7665ceaa95dee8ffc8efdba -> /home/jenkins/minikube-integration/19476-9624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-867810 host does not exist
	  To start a cluster, run: "minikube start -p download-only-867810"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

TestDownloadOnly/v1.31.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.21s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-867810
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (1.07s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-492817 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-492817" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-492817
--- PASS: TestDownloadOnlyKic (1.07s)

TestBinaryMirror (0.77s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-843469 --alsologtostderr --binary-mirror http://127.0.0.1:33413 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-843469" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-843469
--- PASS: TestBinaryMirror (0.77s)
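
TestBinaryMirror points --binary-mirror at a local HTTP endpoint (http://127.0.0.1:33413 above) so that Kubernetes binaries are fetched from it instead of dl.k8s.io. A hand-rolled sketch of the same idea, assuming python3 is available and that the served directory mirrors the dl.k8s.io release path layout; the port, directory, and profile name here are hypothetical:

$ mkdir -p mirror && cd mirror    # populate with release/<version>/bin/linux/amd64/... first
$ python3 -m http.server 33413 &
$ out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
    --binary-mirror http://127.0.0.1:33413 --driver=docker --container-runtime=crio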

TestOffline (58.05s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-017520 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-017520 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (55.677031924s)
helpers_test.go:175: Cleaning up "offline-crio-017520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-017520
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-017520: (2.369907636s)
--- PASS: TestOffline (58.05s)
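
TestOffline verifies that a start succeeds without fresh downloads once everything it needs is cached. The offline simulation itself lives in aab_offline_test.go and is not shown in this log, but the effect can be approximated by hand: warm the cache with --download-only, cut external connectivity, and start again from cache alone (profile name hypothetical):

$ out/minikube-linux-amd64 start --download-only -p offline-demo --driver=docker --container-runtime=crio
$ out/minikube-linux-amd64 start -p offline-demo --memory=2048 --wait=true \
    --driver=docker --container-runtime=crio    # with networking cut, this should be served from the local cache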

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-454931
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-454931: exit status 85 (49.864276ms)
-- stdout --
	* Profile "addons-454931" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-454931"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
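
The exit status 85 is the point of this precondition check: addon commands against a profile that does not exist refuse with a pointer to "minikube profile list" instead of creating anything. The same behavior is easy to confirm directly (profile name hypothetical; output mirrors the captured stdout above):

$ out/minikube-linux-amd64 addons enable dashboard -p does-not-exist; echo "exit=$?"
* Profile "does-not-exist" not found. Run "minikube profile list" to view all profiles.
  To start a cluster, run: "minikube start -p does-not-exist"
exit=85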

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-454931
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-454931: exit status 85 (48.734944ms)
-- stdout --
	* Profile "addons-454931" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-454931"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (193.02s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-454931 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-454931 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m13.023414839s)
--- PASS: TestAddons/Setup (193.02s)
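
Fourteen addons are requested in this one start, and with --wait=true the 3m13s wall clock covers image pulls plus per-addon readiness. After a run like this is up, the enabled set can be confirmed with minikube's addon listing (output omitted here):

$ out/minikube-linux-amd64 addons list -p addons-454931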

TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-454931 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-454931 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)
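
What the two commands above establish: the gcp-auth addon's webhook copies its gcp-auth secret into namespaces created after setup, so the same check can be replayed against any fresh namespace (namespace name hypothetical):

$ kubectl --context addons-454931 create ns scratch-ns
$ kubectl --context addons-454931 get secret gcp-auth -n scratch-ns -o jsonpath='{.metadata.name}'
gcp-auth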

TestAddons/parallel/Registry (17.68s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.524115ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-v7654" [d56000ae-59d9-4ff4-afc3-c173d1aa817f] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002357362s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-sjwlk" [497530f4-1b24-4840-a1d3-6d7174146af0] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003649961s
addons_test.go:342: (dbg) Run:  kubectl --context addons-454931 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-454931 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-454931 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.896007871s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-454931 ip
2024/08/19 10:52:38 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-454931 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.68s)
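
The core of this test is the in-cluster probe: a throwaway busybox pod resolves the registry Service's cluster DNS name and hits it with wget --spider, and the host side then checks the registry-proxy port on the node IP (the DEBUG line above shows 192.168.49.2:5000). Replayed by hand with values taken from the log:

$ kubectl --context addons-454931 run --rm registry-test --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -it -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
$ curl -sI "http://$(out/minikube-linux-amd64 -p addons-454931 ip):5000/"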

TestAddons/parallel/InspektorGadget (11.18s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-jzwrj" [ef34f616-efe5-4d9e-9c1a-cc029f6a8b21] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00402403s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-454931
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-454931: (6.171224819s)
--- PASS: TestAddons/parallel/InspektorGadget (11.18s)

TestAddons/parallel/HelmTiller (10.81s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.377817ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-cdqdx" [e734e815-6d31-40f3-98f0-cc7c3f38ba44] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.00318495s
addons_test.go:475: (dbg) Run:  kubectl --context addons-454931 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-454931 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.315798473s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-454931 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.81s)
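The tiller check launches the matching helm v2 client in-cluster and asks for both client and server versions, which only succeeds if tiller-deploy answers. Reproduced by hand (the pod name helm-check is arbitrary):

	kubectl --context addons-454931 -n kube-system run helm-check --rm -it \
	  --restart=Never --image=docker.io/alpine/helm:2.16.3 -- version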

                                                
                                    
TestAddons/parallel/CSI (49.61s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 4.747367ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-454931 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-454931 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-454931 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-454931 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-454931 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-454931 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [eafa859d-0e04-44ce-abb8-d4be85a1c3aa] Pending
helpers_test.go:344: "task-pv-pod" [eafa859d-0e04-44ce-abb8-d4be85a1c3aa] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [eafa859d-0e04-44ce-abb8-d4be85a1c3aa] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.004073298s
addons_test.go:590: (dbg) Run:  kubectl --context addons-454931 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-454931 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-454931 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-454931 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-454931 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-454931 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-454931 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-454931 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-454931 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-454931 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-454931 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-454931 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-454931 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-454931 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-454931 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-454931 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-454931 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-454931 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-454931 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [9ad8353c-9b8b-4579-81cc-5a66e3733200] Pending
helpers_test.go:344: "task-pv-pod-restore" [9ad8353c-9b8b-4579-81cc-5a66e3733200] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [9ad8353c-9b8b-4579-81cc-5a66e3733200] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 12.0032607s
addons_test.go:632: (dbg) Run:  kubectl --context addons-454931 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-454931 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-454931 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-454931 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-454931 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.584014339s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-454931 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (49.61s)
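The sequence above walks the whole snapshot/restore path: provision a PVC, write into it from a pod, snapshot it, delete the original, and restore the snapshot into a fresh claim. The restore step is plain Kubernetes: a new PVC whose dataSource names the VolumeSnapshot. A minimal sketch of such a claim (the storage class name and size are assumptions standing in for the addon's testdata, which is not reproduced in this log; <<- strips the leading display tabs before the YAML is parsed):

	kubectl --context addons-454931 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc-restore
	spec:
	  storageClassName: csi-hostpath-sc
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 1Gi
	  dataSource:
	    name: new-snapshot-demo
	    kind: VolumeSnapshot
	    apiGroup: snapshot.storage.k8s.io
	EOF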

                                                
                                    
TestAddons/parallel/Headlamp (18.61s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-454931 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-454931 --alsologtostderr -v=1: (1.009137252s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-nqkvj" [9e18ef25-29a2-4236-8d0c-71437898a75b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-nqkvj" [9e18ef25-29a2-4236-8d0c-71437898a75b] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003173708s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-454931 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-454931 addons disable headlamp --alsologtostderr -v=1: (5.59349813s)
--- PASS: TestAddons/parallel/Headlamp (18.61s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.47s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-thbcw" [8f1f86a2-c2c8-4e33-926a-f99a34dfc55b] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003060653s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-454931
--- PASS: TestAddons/parallel/CloudSpanner (5.47s)

                                                
                                    
TestAddons/parallel/LocalPath (15.15s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-454931 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-454931 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-454931 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-454931 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-454931 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-454931 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-454931 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-454931 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-454931 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [274ed9aa-9ca7-4b53-9b5f-a34103a8123e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [274ed9aa-9ca7-4b53-9b5f-a34103a8123e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [274ed9aa-9ca7-4b53-9b5f-a34103a8123e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 8.003372725s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-454931 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-454931 ssh "cat /opt/local-path-provisioner/pvc-6f8c5a14-e9d6-473e-8f6f-d18080db96da_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-454931 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-454931 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-454931 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (15.15s)
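local-path provisions hostPath-backed volumes under /opt/local-path-provisioner on the node, which is why the written file can be read back over SSH once the pod completes. A by-hand equivalent that globs over the PVC's generated UID instead of hard-coding it:

	out/minikube-linux-amd64 -p addons-454931 ssh \
	  "cat /opt/local-path-provisioner/pvc-*_default_test-pvc/file1"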

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.44s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-4xgtg" [9f3c31d4-b4dd-4fc8-b9c4-1ca0c24775c8] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003242326s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-454931
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.44s)

                                                
                                    
TestAddons/parallel/Yakd (10.74s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-lx49h" [cab851ae-f252-457f-abf2-3941a23a1e1d] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003440148s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-454931 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-454931 addons disable yakd --alsologtostderr -v=1: (5.739422909s)
--- PASS: TestAddons/parallel/Yakd (10.74s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.06s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-454931
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-454931: (11.81621999s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-454931
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-454931
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-454931
--- PASS: TestAddons/StoppedEnableDisable (12.06s)
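Addon state is recorded in the profile's config, so enable/disable works against a stopped cluster and takes effect on the next start; that is what this test locks in. The shape of the check, by hand:

	out/minikube-linux-amd64 stop -p addons-454931
	# no running apiserver is needed; these only flip the profile config
	out/minikube-linux-amd64 addons enable dashboard -p addons-454931
	out/minikube-linux-amd64 addons disable dashboard -p addons-454931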

                                                
                                    
TestCertOptions (24.9s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-799638 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-799638 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (22.314912683s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-799638 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-799638 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-799638 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-799638" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-799638
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-799638: (1.94998594s)
--- PASS: TestCertOptions (24.90s)
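The openssl step is where the custom --apiserver-ips/--apiserver-names values are asserted: they must show up in the certificate's Subject Alternative Name extension, and the non-default :8555 port in the kubeconfig's server URL. A condensed sketch of both checks (the grep and jsonpath filters are illustrative):

	out/minikube-linux-amd64 -p cert-options-799638 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"
	kubectl --context cert-options-799638 config view \
	  -o jsonpath='{.clusters[0].cluster.server}'   # should end in :8555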

                                                
                                    
TestCertExpiration (220.98s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-701282 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-701282 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (23.726170114s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-701282 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-701282 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (14.806287851s)
helpers_test.go:175: Cleaning up "cert-expiration-701282" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-701282
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-701282: (2.449058576s)
--- PASS: TestCertExpiration (220.98s)
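The two starts bracket a deliberate wait: certificates minted with --cert-expiration=3m are allowed to run down (roughly the three idle minutes inside the 220.98s total), and the second start with an 8760h expiry has to regenerate them. Whether regeneration happened is visible in the certificate's notAfter date (a sketch):

	out/minikube-linux-amd64 -p cert-expiration-701282 ssh \
	  "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"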

                                                
                                    
TestForceSystemdFlag (31.66s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-508981 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-508981 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (28.999679687s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-508981 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-508981" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-508981
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-508981: (2.331471164s)
--- PASS: TestForceSystemdFlag (31.66s)
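--force-systemd should land as CRI-O's cgroup manager, which is what the cat of 02-crio.conf verifies. Narrowed to the line in question (the exact expected spelling is an assumption):

	out/minikube-linux-amd64 -p force-systemd-flag-508981 ssh \
	  "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
	# expected when systemd is forced:  cgroup_manager = "systemd"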

                                                
                                    
TestForceSystemdEnv (36.21s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-092740 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-092740 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.94425833s)
helpers_test.go:175: Cleaning up "force-systemd-env-092740" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-092740
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-092740: (2.265826707s)
--- PASS: TestForceSystemdEnv (36.21s)
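The env variant drives the same behaviour through MINIKUBE_FORCE_SYSTEMD rather than a flag; the variable shows up (empty) in the environment dumps later in this report. Sketch:

	MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-amd64 start -p force-systemd-env-092740 \
	  --memory=2048 --driver=docker --container-runtime=crio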

                                                
                                    
TestKVMDriverInstallOrUpdate (4.5s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.50s)

                                                
                                    
TestErrorSpam/setup (23.38s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-861720 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-861720 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-861720 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-861720 --driver=docker  --container-runtime=crio: (23.37675542s)
--- PASS: TestErrorSpam/setup (23.38s)

                                                
                                    
TestErrorSpam/start (0.58s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-861720 --log_dir /tmp/nospam-861720 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-861720 --log_dir /tmp/nospam-861720 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-861720 --log_dir /tmp/nospam-861720 start --dry-run
--- PASS: TestErrorSpam/start (0.58s)

                                                
                                    
TestErrorSpam/status (0.86s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-861720 --log_dir /tmp/nospam-861720 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-861720 --log_dir /tmp/nospam-861720 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-861720 --log_dir /tmp/nospam-861720 status
--- PASS: TestErrorSpam/status (0.86s)

                                                
                                    
TestErrorSpam/pause (1.5s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-861720 --log_dir /tmp/nospam-861720 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-861720 --log_dir /tmp/nospam-861720 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-861720 --log_dir /tmp/nospam-861720 pause
--- PASS: TestErrorSpam/pause (1.50s)

                                                
                                    
TestErrorSpam/unpause (1.65s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-861720 --log_dir /tmp/nospam-861720 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-861720 --log_dir /tmp/nospam-861720 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-861720 --log_dir /tmp/nospam-861720 unpause
--- PASS: TestErrorSpam/unpause (1.65s)

                                                
                                    
TestErrorSpam/stop (1.34s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-861720 --log_dir /tmp/nospam-861720 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-861720 --log_dir /tmp/nospam-861720 stop: (1.167536935s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-861720 --log_dir /tmp/nospam-861720 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-861720 --log_dir /tmp/nospam-861720 stop
--- PASS: TestErrorSpam/stop (1.34s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19476-9624/.minikube/files/etc/test/nested/copy/16413/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (45.8s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-675456 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-675456 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (45.799719124s)
--- PASS: TestFunctional/serial/StartWithProxy (45.80s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (35.64s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-675456 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-675456 --alsologtostderr -v=8: (35.634439396s)
functional_test.go:663: soft start took 35.63520056s for "functional-675456" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.64s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-675456 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.76s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-675456 cache add registry.k8s.io/pause:3.1: (1.268476865s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-675456 cache add registry.k8s.io/pause:3.3: (1.239000329s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-675456 cache add registry.k8s.io/pause:latest: (1.253547919s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.76s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-675456 /tmp/TestFunctionalserialCacheCmdcacheadd_local850645800/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 cache add minikube-local-cache-test:functional-675456
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-675456 cache add minikube-local-cache-test:functional-675456: (1.683446299s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 cache delete minikube-local-cache-test:functional-675456
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-675456
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.00s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-675456 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (263.06405ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-675456 cache reload: (1.008307575s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)
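The reload check is: remove a cached image from the node's runtime, confirm crictl no longer finds it (the exit-1 block above), then let cache reload push every cached image back. By hand, using the same commands the test drives:

	out/minikube-linux-amd64 -p functional-675456 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-675456 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # exit 1: image gone
	out/minikube-linux-amd64 -p functional-675456 cache reload
	out/minikube-linux-amd64 -p functional-675456 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # exit 0: restored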

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 kubectl -- --context functional-675456 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-675456 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (50.13s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-675456 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-675456 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (50.126788554s)
functional_test.go:761: restart took 50.126915459s for "functional-675456" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (50.13s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-675456 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
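The health assertion parses the control-plane pods as JSON and checks each pod's phase plus its Ready condition. A jsonpath one-liner that surfaces the same fields (a sketch, not the test's own query):

	kubectl --context functional-675456 -n kube-system get po -l tier=control-plane \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'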

                                                
                                    
TestFunctional/serial/LogsCmd (1.34s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-675456 logs: (1.335092548s)
--- PASS: TestFunctional/serial/LogsCmd (1.34s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.36s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 logs --file /tmp/TestFunctionalserialLogsFileCmd1844401186/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-675456 logs --file /tmp/TestFunctionalserialLogsFileCmd1844401186/001/logs.txt: (1.363609988s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.36s)

                                                
                                    
TestFunctional/serial/InvalidService (3.73s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-675456 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-675456
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-675456: exit status 115 (317.208147ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30922 |
	|-----------|-------------|-------------|---------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-675456 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.73s)
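SVC_UNREACHABLE is minikube noticing that the NodePort exists but nothing backs it: the invalid Service's selector matches no running pod, so its Endpoints object stays empty. That can be confirmed directly before the delete (sketch):

	kubectl --context functional-675456 get endpoints invalid-svc
	# the ENDPOINTS column stays empty when no ready pod matches the selector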

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.36s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-675456 config get cpus: exit status 14 (74.532166ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-675456 config get cpus: exit status 14 (62.198649ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)
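The two exit status 14 blocks are the behaviour being locked in: config get on an unset key fails instead of printing an empty value, so scripts can branch on it. For example:

	out/minikube-linux-amd64 -p functional-675456 config unset cpus
	out/minikube-linux-amd64 -p functional-675456 config get cpus
	echo "exit: $?"   # 14 while the key is unset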

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.72s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-675456 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-675456 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 54010: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.72s)

                                                
                                    
TestFunctional/parallel/DryRun (0.4s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-675456 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-675456 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (166.646055ms)

-- stdout --
	* [functional-675456] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19476
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19476-9624/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-9624/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I0819 11:01:45.751043   52640 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:01:45.751181   52640 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:01:45.751197   52640 out.go:358] Setting ErrFile to fd 2...
	I0819 11:01:45.751202   52640 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:01:45.751368   52640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-9624/.minikube/bin
	I0819 11:01:45.751929   52640 out.go:352] Setting JSON to false
	I0819 11:01:45.752924   52640 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2646,"bootTime":1724062660,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 11:01:45.752988   52640 start.go:139] virtualization: kvm guest
	I0819 11:01:45.755155   52640 out.go:177] * [functional-675456] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 11:01:45.756832   52640 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 11:01:45.756836   52640 notify.go:220] Checking for updates...
	I0819 11:01:45.759022   52640 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:01:45.760259   52640 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19476-9624/kubeconfig
	I0819 11:01:45.761308   52640 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-9624/.minikube
	I0819 11:01:45.762242   52640 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 11:01:45.763185   52640 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:01:45.765216   52640 config.go:182] Loaded profile config "functional-675456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:01:45.765702   52640 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:01:45.792795   52640 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 11:01:45.792961   52640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 11:01:45.863163   52640 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-19 11:01:45.852711364 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 11:01:45.863315   52640 docker.go:307] overlay module found
	I0819 11:01:45.866141   52640 out.go:177] * Using the docker driver based on existing profile
	I0819 11:01:45.867264   52640 start.go:297] selected driver: docker
	I0819 11:01:45.867290   52640 start.go:901] validating driver "docker" against &{Name:functional-675456 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-675456 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:01:45.867377   52640 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:01:45.869663   52640 out.go:201] 
	W0819 11:01:45.870896   52640 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0819 11:01:45.872210   52640 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-675456 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.40s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.19s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-675456 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-675456 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (185.941654ms)

-- stdout --
	* [functional-675456] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19476
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19476-9624/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-9624/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I0819 11:01:45.582875   52432 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:01:45.583078   52432 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:01:45.583126   52432 out.go:358] Setting ErrFile to fd 2...
	I0819 11:01:45.583144   52432 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:01:45.583604   52432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-9624/.minikube/bin
	I0819 11:01:45.584401   52432 out.go:352] Setting JSON to false
	I0819 11:01:45.585934   52432 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2646,"bootTime":1724062660,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 11:01:45.586059   52432 start.go:139] virtualization: kvm guest
	I0819 11:01:45.588655   52432 out.go:177] * [functional-675456] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0819 11:01:45.591385   52432 notify.go:220] Checking for updates...
	I0819 11:01:45.592212   52432 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 11:01:45.593847   52432 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:01:45.599701   52432 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19476-9624/kubeconfig
	I0819 11:01:45.601259   52432 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-9624/.minikube
	I0819 11:01:45.602494   52432 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 11:01:45.603842   52432 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:01:45.605763   52432 config.go:182] Loaded profile config "functional-675456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:01:45.606263   52432 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:01:45.634633   52432 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 11:01:45.634875   52432 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 11:01:45.698015   52432 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-19 11:01:45.687268934 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 11:01:45.698124   52432 docker.go:307] overlay module found
	I0819 11:01:45.700016   52432 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0819 11:01:45.701299   52432 start.go:297] selected driver: docker
	I0819 11:01:45.701328   52432 start.go:901] validating driver "docker" against &{Name:functional-675456 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-675456 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:01:45.701426   52432 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:01:45.703368   52432 out.go:201] 
	W0819 11:01:45.704615   52432 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0819 11:01:45.705771   52432 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)
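The French output above is the expected result: the test asserts that minikube localizes its messages when the caller's locale asks for French, and the RSRC_INSUFFICIENT_REQ_MEMORY error is the same one the DryRun test triggers in English. A minimal Go sketch of the same invocation, assuming the standard LC_ALL variable is what selects the language (the exact mechanism is not shown in this log):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Assumed mechanism: request French output by setting the locale
		// environment before invoking the binary.
		cmd := exec.Command("out/minikube-linux-amd64",
			"start", "-p", "functional-675456",
			"--dry-run", "--memory", "250MB",
			"--driver=docker", "--container-runtime=crio")
		cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8")
		out, _ := cmd.CombinedOutput() // exit status 23 is the expected outcome here
		fmt.Printf("%s", out)
	}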

                                                
                                    
TestFunctional/parallel/StatusCmd (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.09s)
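For reference, `status -o json` emits the same fields the Go template above reads via {{.Host}}, {{.Kubelet}}, and so on (the `kublet:` label is verbatim from the test's format string). A hedged sketch that decodes just those four fields; the struct is an assumption based on the template accessors, not a documented schema:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Status mirrors the accessors used in the template; field names are
	// taken from {{.Host}}/{{.Kubelet}}/{{.APIServer}}/{{.Kubeconfig}}.
	type Status struct {
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
	}

	func main() {
		// A non-zero exit from `minikube status` would mean a component is down.
		out, _ := exec.Command("out/minikube-linux-amd64",
			"-p", "functional-675456", "status", "-o", "json").Output()
		var st Status
		if err := json.Unmarshal(out, &st); err == nil {
			fmt.Println(st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
		}
	}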

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-675456 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-675456 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-4xtdj" [325ef796-02d5-488f-9ef2-91b56f52c457] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-4xtdj" [325ef796-02d5-488f-9ef2-91b56f52c457] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.03655137s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 service hello-node-connect --url
E0819 11:02:08.593301   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30265
functional_test.go:1675: http://192.168.49.2:30265: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-4xtdj

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30265
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.83s)
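The check boils down to an HTTP GET against the NodePort URL that `minikube service --url` printed, verifying the echoserver answers with the request dump shown above. A small sketch, hardcoding the URL from this run (it changes per run):

	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// GET the NodePort endpoint found by the test and print the body,
		// which should contain the Hostname/Request Information block.
		resp, err := http.Get("http://192.168.49.2:30265")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status=%d\n%s", resp.StatusCode, body)
	}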

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (38.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [e1ce15cf-08b0-43f6-8f1a-fadce075262a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003840377s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-675456 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-675456 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-675456 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-675456 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f941a571-4866-4a9e-880f-0223fc223bb0] Pending
helpers_test.go:344: "sp-pod" [f941a571-4866-4a9e-880f-0223fc223bb0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0819 11:02:03.461673   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:02:03.468860   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:02:03.480287   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:02:03.501829   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:02:03.543228   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:02:03.624709   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:02:03.786228   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:02:04.107760   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:02:04.749142   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:02:06.031494   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [f941a571-4866-4a9e-880f-0223fc223bb0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.00392975s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-675456 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-675456 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-675456 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [aee53c8f-4924-4b71-8984-161fbbec0626] Pending
helpers_test.go:344: "sp-pod" [aee53c8f-4924-4b71-8984-161fbbec0626] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [aee53c8f-4924-4b71-8984-161fbbec0626] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004004846s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-675456 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (38.25s)
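The interesting assertion is the final `ls /tmp/mount`: `foo` was created in the first sp-pod, that pod was deleted and recreated, and the file must still be there because the mount is backed by the PVC rather than the container filesystem. A sketch of that last check (pod name as in this run):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Verify the file written by the first pod survived pod recreation.
		out, err := exec.Command("kubectl", "--context", "functional-675456",
			"exec", "sp-pod", "--", "ls", "/tmp/mount").Output()
		if err != nil {
			panic(err)
		}
		if !strings.Contains(string(out), "foo") {
			fmt.Println("volume did not persist across pod recreation")
		}
	}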

                                                
                                    
TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 ssh "cat /etc/hostname"
2024/08/19 11:01:59 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 ssh -n functional-675456 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 cp functional-675456:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd573178341/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 ssh -n functional-675456 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 ssh -n functional-675456 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.02s)

                                                
                                    
TestFunctional/parallel/MySQL (27.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-675456 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-hqkl2" [8a99f289-d0df-43b9-b02a-a19d34036c4c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-hqkl2" [8a99f289-d0df-43b9-b02a-a19d34036c4c] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.035253385s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-675456 exec mysql-6cdb49bbb-hqkl2 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-675456 exec mysql-6cdb49bbb-hqkl2 -- mysql -ppassword -e "show databases;": exit status 1 (217.42838ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-675456 exec mysql-6cdb49bbb-hqkl2 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.65s)
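The first `show databases;` failing with ERROR 2002 is expected noise: the pod reports Running while mysqld is still bringing up its socket, so the test simply reruns the query. A sketch of that retry, using the pod name from this run and an assumed 10x2s budget:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Retry until mysqld accepts connections; a Running pod can still
		// refuse socket connections while initializing (ERROR 2002).
		for i := 0; i < 10; i++ {
			out, err := exec.Command("kubectl", "--context", "functional-675456",
				"exec", "mysql-6cdb49bbb-hqkl2", "--",
				"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
			if err == nil {
				fmt.Printf("%s", out)
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("mysql never became ready")
	}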

                                                
                                    
TestFunctional/parallel/FileSync (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/16413/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 ssh "sudo cat /etc/test/nested/copy/16413/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)
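What this exercises is minikube's file sync: files placed under $MINIKUBE_HOME/files/<path> on the host are copied to <path> inside the node at start, which is where /etc/test/nested/copy/16413/hosts comes from (16413 is this run's process-derived suffix). A sketch of staging such a file, assuming MINIKUBE_HOME is set as in the log:

	package main

	import (
		"os"
		"os/exec"
		"path/filepath"
	)

	func main() {
		// Stage a file on the host side of the sync directory...
		home := os.Getenv("MINIKUBE_HOME")
		src := filepath.Join(home, "files", "etc", "test", "nested", "copy", "16413", "hosts")
		os.MkdirAll(filepath.Dir(src), 0o755)
		os.WriteFile(src, []byte("Test file for checking file sync process"), 0o644)
		// ...and after the next `minikube start`, it should exist in the node:
		exec.Command("out/minikube-linux-amd64", "-p", "functional-675456",
			"ssh", "sudo cat /etc/test/nested/copy/16413/hosts").Run()
	}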

                                                
                                    
TestFunctional/parallel/CertSync (2.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/16413.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 ssh "sudo cat /etc/ssl/certs/16413.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/16413.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 ssh "sudo cat /usr/share/ca-certificates/16413.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/164132.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 ssh "sudo cat /etc/ssl/certs/164132.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/164132.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 ssh "sudo cat /usr/share/ca-certificates/164132.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.10s)
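The hashed names checked above (51391683.0, 3ec20f2e.0) follow the OpenSSL c_rehash convention, <subject-hash>.0, placed alongside the PEM so TLS libraries can look certificates up by subject. A sketch that probes the same three locations for the first cert:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Each synced cert should be readable both under its PEM name and
		// under its OpenSSL subject-hash alias.
		paths := []string{
			"/etc/ssl/certs/16413.pem",
			"/usr/share/ca-certificates/16413.pem",
			"/etc/ssl/certs/51391683.0",
		}
		for _, p := range paths {
			err := exec.Command("out/minikube-linux-amd64", "-p", "functional-675456",
				"ssh", "sudo cat "+p).Run()
			fmt.Println(p, "present:", err == nil)
		}
	}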

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-675456 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)
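The go-template passed to kubectl above ranges over the labels map of the first node and prints each key. The same range construct, shown standalone against an illustrative map (the label values below are invented for the example):

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		// Same template body as the kubectl invocation, applied to a local map.
		labels := map[string]string{
			"kubernetes.io/hostname": "functional-675456",
			"kubernetes.io/os":       "linux",
		}
		t := template.Must(template.New("labels").Parse(`{{range $k, $v := .}}{{$k}} {{end}}`))
		t.Execute(os.Stdout, labels)
	}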

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-675456 ssh "sudo systemctl is-active docker": exit status 1 (336.461135ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-675456 ssh "sudo systemctl is-active containerd": exit status 1 (400.872997ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)
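`systemctl is-active` prints the unit state and exits non-zero when the unit is not active (inactive conventionally maps to exit code 3, which is what the ssh wrapper surfaces here as "Process exited with status 3"), so `inactive` plus a non-zero exit is exactly the pass condition for a disabled runtime. A sketch that separates the state string from the exit code:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Capture both the printed state and the exit code of is-active.
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-675456",
			"ssh", "sudo systemctl is-active docker")
		out, err := cmd.CombinedOutput()
		code := 0
		if ee, ok := err.(*exec.ExitError); ok {
			code = ee.ExitCode()
		}
		fmt.Printf("state=%s exit=%d\n", out, code)
	}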

                                                
                                    
TestFunctional/parallel/License (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.58s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-675456 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-675456 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-2zx2p" [0b007ef1-81ce-4dd5-9ea3-dab59d4d9698] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-2zx2p" [0b007ef1-81ce-4dd5-9ea3-dab59d4d9698] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004457296s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.20s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.74s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-675456 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-675456
localhost/kicbase/echo-server:functional-675456
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kindest/kindnetd:v20240730-75a5af0c
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-675456 image ls --format short --alsologtostderr:
I0819 11:02:18.909610   58934 out.go:345] Setting OutFile to fd 1 ...
I0819 11:02:18.909917   58934 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:02:18.909929   58934 out.go:358] Setting ErrFile to fd 2...
I0819 11:02:18.909937   58934 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:02:18.910235   58934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-9624/.minikube/bin
I0819 11:02:18.911005   58934 config.go:182] Loaded profile config "functional-675456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 11:02:18.911112   58934 config.go:182] Loaded profile config "functional-675456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 11:02:18.911574   58934 cli_runner.go:164] Run: docker container inspect functional-675456 --format={{.State.Status}}
I0819 11:02:18.929136   58934 ssh_runner.go:195] Run: systemctl --version
I0819 11:02:18.929199   58934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-675456
I0819 11:02:18.947800   58934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/functional-675456/id_rsa Username:docker}
I0819 11:02:19.034261   58934 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
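Per the stderr trace, the listing ultimately comes from `sudo crictl images --output json` inside the node. A sketch that decodes the tags from that JSON; the struct covers only the fields used and its exact shape is an assumption:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Minimal subset of crictl's JSON output; treat the field names as an
	// assumption based on the CRI Image type.
	type crictlImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-675456",
			"ssh", "sudo crictl images --output json").Output()
		if err != nil {
			panic(err)
		}
		var imgs crictlImages
		if err := json.Unmarshal(out, &imgs); err != nil {
			panic(err)
		}
		for _, img := range imgs.Images {
			for _, tag := range img.RepoTags {
				fmt.Println(tag)
			}
		}
	}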

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-675456 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-scheduler          | v1.31.0            | 1766f54c897f0 | 68.4MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/nginx                 | latest             | 5ef79149e0ec8 | 192MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.31.0            | 604f5db92eaa8 | 95.2MB |
| registry.k8s.io/kube-controller-manager | v1.31.0            | 045733566833c | 89.4MB |
| registry.k8s.io/kube-proxy              | v1.31.0            | ad83b2ca7b09e | 92.7MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | alpine             | 0f0eda053dc5c | 44.7MB |
| localhost/minikube-local-cache-test     | functional-675456  | 09f713337bbc9 | 3.33kB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | 917d7814b9b5b | 87.2MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| localhost/kicbase/echo-server           | functional-675456  | 9056ab77afb8e | 4.94MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-675456 image ls --format table --alsologtostderr:
I0819 11:02:20.757491   59406 out.go:345] Setting OutFile to fd 1 ...
I0819 11:02:20.757801   59406 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:02:20.757812   59406 out.go:358] Setting ErrFile to fd 2...
I0819 11:02:20.757816   59406 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:02:20.758003   59406 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-9624/.minikube/bin
I0819 11:02:20.758564   59406 config.go:182] Loaded profile config "functional-675456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 11:02:20.758656   59406 config.go:182] Loaded profile config "functional-675456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 11:02:20.759049   59406 cli_runner.go:164] Run: docker container inspect functional-675456 --format={{.State.Status}}
I0819 11:02:20.776346   59406 ssh_runner.go:195] Run: systemctl --version
I0819 11:02:20.776391   59406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-675456
I0819 11:02:20.793855   59406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/functional-675456/id_rsa Username:docker}
I0819 11:02:20.878461   59406 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-675456 image ls --format json --alsologtostderr:
[{"id":"917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"87165492"},{"id":"0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a","repoDigests":["docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0","docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158"],"repoTags":["docker.io/library/nginx:alpine"],"size":"44668625"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5
b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247
077"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"ad83b2ca7b09e6162f96f933eecded731
cbebf049c78f941fd0ce560a86b6494","repoDigests":["registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf","registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"92728217"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c","repoDigests":["docker.io/library/nginx@
sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add","docker.io/library/nginx@sha256:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f"],"repoTags":["docker.io/library/nginx:latest"],"size":"191841612"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-675456"],"size":"4943877"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"95233506"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:678c
105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d","registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"89437512"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":["registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a","registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"68420936"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"07655ddf2eebe5d
250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"09f713337bbc9d5e94d3c1c6ffcc8f493baa13793c3ba5f24f12f95350eaa976","repoDigests":["localhost/minikube-local-cache-test@sha256:fdfcbeac1e1f7df28047efaf5257c7c02165d6d25d3a34448a30812170cd4ea2"],"repoTags":["localhost/minikube-local-cache-test:functional-675456"],"size":"3328"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-675456 image ls --format json --alsologtostderr:
I0819 11:02:20.557766   59354 out.go:345] Setting OutFile to fd 1 ...
I0819 11:02:20.557899   59354 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:02:20.557909   59354 out.go:358] Setting ErrFile to fd 2...
I0819 11:02:20.557914   59354 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:02:20.558129   59354 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-9624/.minikube/bin
I0819 11:02:20.558693   59354 config.go:182] Loaded profile config "functional-675456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 11:02:20.558803   59354 config.go:182] Loaded profile config "functional-675456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 11:02:20.559203   59354 cli_runner.go:164] Run: docker container inspect functional-675456 --format={{.State.Status}}
I0819 11:02:20.576774   59354 ssh_runner.go:195] Run: systemctl --version
I0819 11:02:20.576820   59354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-675456
I0819 11:02:20.594604   59354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/functional-675456/id_rsa Username:docker}
I0819 11:02:20.677944   59354 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-675456 image ls --format yaml --alsologtostderr:
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a
repoDigests:
- docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0
- docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158
repoTags:
- docker.io/library/nginx:alpine
size: "44668625"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-675456
size: "4943877"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "92728217"
- id: 917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "87165492"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 09f713337bbc9d5e94d3c1c6ffcc8f493baa13793c3ba5f24f12f95350eaa976
repoDigests:
- localhost/minikube-local-cache-test@sha256:fdfcbeac1e1f7df28047efaf5257c7c02165d6d25d3a34448a30812170cd4ea2
repoTags:
- localhost/minikube-local-cache-test:functional-675456
size: "3328"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "95233506"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "89437512"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "68420936"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-675456 image ls --format yaml --alsologtostderr:
I0819 11:02:19.127729   58981 out.go:345] Setting OutFile to fd 1 ...
I0819 11:02:19.127836   58981 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:02:19.127843   58981 out.go:358] Setting ErrFile to fd 2...
I0819 11:02:19.127849   58981 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:02:19.128041   58981 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-9624/.minikube/bin
I0819 11:02:19.128613   58981 config.go:182] Loaded profile config "functional-675456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 11:02:19.128725   58981 config.go:182] Loaded profile config "functional-675456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 11:02:19.129117   58981 cli_runner.go:164] Run: docker container inspect functional-675456 --format={{.State.Status}}
I0819 11:02:19.146752   58981 ssh_runner.go:195] Run: systemctl --version
I0819 11:02:19.146826   58981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-675456
I0819 11:02:19.164486   58981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/functional-675456/id_rsa Username:docker}
I0819 11:02:19.258843   58981 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
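Note: the ssh_runner lines above show the YAML listing is ultimately produced by crictl inside the node; a minimal manual equivalent (profile name taken from this log) is:
out/minikube-linux-amd64 -p functional-675456 ssh -- sudo crictl images --output json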

TestFunctional/parallel/ImageCommands/ImageBuild (3.06s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-675456 ssh pgrep buildkitd: exit status 1 (251.062505ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 image build -t localhost/my-image:functional-675456 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-675456 image build -t localhost/my-image:functional-675456 testdata/build --alsologtostderr: (2.605089887s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-675456 image build -t localhost/my-image:functional-675456 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 8cb07860257
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-675456
--> 9978b171549
Successfully tagged localhost/my-image:functional-675456
9978b1715499e7d0435f1710e9a13613c3c143ba47641720a2be7fc73bc7cf2e
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-675456 image build -t localhost/my-image:functional-675456 testdata/build --alsologtostderr:
I0819 11:02:19.594809   59182 out.go:345] Setting OutFile to fd 1 ...
I0819 11:02:19.594947   59182 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:02:19.594956   59182 out.go:358] Setting ErrFile to fd 2...
I0819 11:02:19.594960   59182 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:02:19.595143   59182 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-9624/.minikube/bin
I0819 11:02:19.595678   59182 config.go:182] Loaded profile config "functional-675456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 11:02:19.596232   59182 config.go:182] Loaded profile config "functional-675456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 11:02:19.596656   59182 cli_runner.go:164] Run: docker container inspect functional-675456 --format={{.State.Status}}
I0819 11:02:19.615117   59182 ssh_runner.go:195] Run: systemctl --version
I0819 11:02:19.615159   59182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-675456
I0819 11:02:19.634386   59182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/functional-675456/id_rsa Username:docker}
I0819 11:02:19.726056   59182 build_images.go:161] Building image from path: /tmp/build.55487581.tar
I0819 11:02:19.726149   59182 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0819 11:02:19.736007   59182 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.55487581.tar
I0819 11:02:19.739779   59182 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.55487581.tar: stat -c "%s %y" /var/lib/minikube/build/build.55487581.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.55487581.tar': No such file or directory
I0819 11:02:19.739811   59182 ssh_runner.go:362] scp /tmp/build.55487581.tar --> /var/lib/minikube/build/build.55487581.tar (3072 bytes)
I0819 11:02:19.762938   59182 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.55487581
I0819 11:02:19.772088   59182 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.55487581 -xf /var/lib/minikube/build/build.55487581.tar
I0819 11:02:19.781224   59182 crio.go:315] Building image: /var/lib/minikube/build/build.55487581
I0819 11:02:19.781321   59182 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-675456 /var/lib/minikube/build/build.55487581 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0819 11:02:22.135574   59182 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-675456 /var/lib/minikube/build/build.55487581 --cgroup-manager=cgroupfs: (2.354224256s)
I0819 11:02:22.135645   59182 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.55487581
I0819 11:02:22.144188   59182 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.55487581.tar
I0819 11:02:22.152591   59182 build_images.go:217] Built localhost/my-image:functional-675456 from /tmp/build.55487581.tar
I0819 11:02:22.152620   59182 build_images.go:133] succeeded building to: functional-675456
I0819 11:02:22.152625   59182 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.06s)
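Note: the STEP lines in the stdout above imply a build context of roughly this shape (a reconstruction from the log output, not the literal contents of testdata/build):
# testdata/build/Dockerfile (inferred from the STEP output)
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /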

TestFunctional/parallel/ImageCommands/Setup (1.77s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.74311909s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-675456
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.77s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.41s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 image load --daemon kicbase/echo-server:functional-675456 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-675456 image load --daemon kicbase/echo-server:functional-675456 --alsologtostderr: (1.182198583s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.41s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 image load --daemon kicbase/echo-server:functional-675456 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.91s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-675456
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 image load --daemon kicbase/echo-server:functional-675456 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.91s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 image save kicbase/echo-server:functional-675456 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 image rm kicbase/echo-server:functional-675456 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.74s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.74s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.52s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-675456
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 image save --daemon kicbase/echo-server:functional-675456 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-675456
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.52s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

TestFunctional/parallel/ProfileCmd/profile_list (0.34s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "287.452134ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "51.602411ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "295.739973ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "52.277467ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)
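Note: the JSON from "profile list -o json" can be post-processed with standard tooling; for example, extracting the valid profile names (jq is illustrative and not part of the test; the valid/invalid field layout assumes minikube's profile-list schema):
out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'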

TestFunctional/parallel/ServiceCmd/List (0.89s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.89s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.91s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 service list -o json
functional_test.go:1494: Took "912.6819ms" to run "out/minikube-linux-amd64 -p functional-675456 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.91s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31335
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

TestFunctional/parallel/ServiceCmd/Format (0.35s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

TestFunctional/parallel/ServiceCmd/URL (0.33s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31335
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.38s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-675456 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-675456 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-675456 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 56934: os: process already finished
helpers_test.go:502: unable to terminate pid 56738: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-675456 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.38s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-675456 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (20.2s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-675456 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [3ff87439-0680-4180-bca9-ad3438453f03] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [3ff87439-0680-4180-bca9-ad3438453f03] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 20.004144284s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (20.20s)

TestFunctional/parallel/MountCmd/any-port (15.69s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-675456 /tmp/TestFunctionalparallelMountCmdany-port1863009461/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724065328803264257" to /tmp/TestFunctionalparallelMountCmdany-port1863009461/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724065328803264257" to /tmp/TestFunctionalparallelMountCmdany-port1863009461/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724065328803264257" to /tmp/TestFunctionalparallelMountCmdany-port1863009461/001/test-1724065328803264257
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-675456 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (308.567283ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 19 11:02 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 19 11:02 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 19 11:02 test-1724065328803264257
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 ssh cat /mount-9p/test-1724065328803264257
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-675456 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [3c39df07-ccd1-4084-83cb-a87f55a10b5f] Pending
helpers_test.go:344: "busybox-mount" [3c39df07-ccd1-4084-83cb-a87f55a10b5f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0819 11:02:13.715051   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [3c39df07-ccd1-4084-83cb-a87f55a10b5f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [3c39df07-ccd1-4084-83cb-a87f55a10b5f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 13.00329324s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-675456 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 ssh stat /mount-9p/created-by-pod
E0819 11:02:23.957400   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-675456 /tmp/TestFunctionalparallelMountCmdany-port1863009461/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (15.69s)
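Note: a by-hand version of the 9p mount round-trip exercised above (the host path is hypothetical; the commands mirror the log):
out/minikube-linux-amd64 mount -p functional-675456 /tmp/src:/mount-9p &
out/minikube-linux-amd64 -p functional-675456 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-675456 ssh -- ls -la /mount-9p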

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-675456 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.89.62 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
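Note: reproducing the tunnel check by hand (the service name and the 10.100.89.62 ingress IP come from this log; the IP will differ per run):
out/minikube-linux-amd64 -p functional-675456 tunnel &
kubectl --context functional-675456 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
curl -s http://10.100.89.62/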

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-675456 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/specific-port (1.62s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-675456 /tmp/TestFunctionalparallelMountCmdspecific-port2508108395/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-675456 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (244.970554ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-675456 /tmp/TestFunctionalparallelMountCmdspecific-port2508108395/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-675456 ssh "sudo umount -f /mount-9p": exit status 1 (243.829284ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-675456 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-675456 /tmp/TestFunctionalparallelMountCmdspecific-port2508108395/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.62s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.41s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-675456 /tmp/TestFunctionalparallelMountCmdVerifyCleanup694149403/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-675456 /tmp/TestFunctionalparallelMountCmdVerifyCleanup694149403/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-675456 /tmp/TestFunctionalparallelMountCmdVerifyCleanup694149403/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-675456 ssh "findmnt -T" /mount1: exit status 1 (297.042687ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-675456 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-675456 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-675456 /tmp/TestFunctionalparallelMountCmdVerifyCleanup694149403/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-675456 /tmp/TestFunctionalparallelMountCmdVerifyCleanup694149403/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-675456 /tmp/TestFunctionalparallelMountCmdVerifyCleanup694149403/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.41s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-675456
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-675456
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-675456
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (105.53s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-954317 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0819 11:02:44.439393   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:03:25.401590   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-954317 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m44.859744s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (105.53s)

TestMultiControlPlane/serial/DeployApp (6.06s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-954317 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-954317 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-954317 -- rollout status deployment/busybox: (4.22572057s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-954317 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-954317 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-954317 -- exec busybox-7dff88458-j9dgt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-954317 -- exec busybox-7dff88458-jqxsf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-954317 -- exec busybox-7dff88458-v6lrd -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-954317 -- exec busybox-7dff88458-j9dgt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-954317 -- exec busybox-7dff88458-jqxsf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-954317 -- exec busybox-7dff88458-v6lrd -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-954317 -- exec busybox-7dff88458-j9dgt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-954317 -- exec busybox-7dff88458-jqxsf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-954317 -- exec busybox-7dff88458-v6lrd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.06s)

TestMultiControlPlane/serial/PingHostFromPods (1.02s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-954317 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-954317 -- exec busybox-7dff88458-j9dgt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-954317 -- exec busybox-7dff88458-j9dgt -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-954317 -- exec busybox-7dff88458-jqxsf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-954317 -- exec busybox-7dff88458-jqxsf -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-954317 -- exec busybox-7dff88458-v6lrd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-954317 -- exec busybox-7dff88458-v6lrd -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.02s)

TestMultiControlPlane/serial/AddWorkerNode (36.36s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-954317 -v=7 --alsologtostderr
E0819 11:04:47.323927   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-954317 -v=7 --alsologtostderr: (35.516191093s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (36.36s)

TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-954317 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.65s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.65s)

TestMultiControlPlane/serial/CopyFile (15.71s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 cp testdata/cp-test.txt ha-954317:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 ssh -n ha-954317 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 cp ha-954317:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile926143797/001/cp-test_ha-954317.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 ssh -n ha-954317 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 cp ha-954317:/home/docker/cp-test.txt ha-954317-m02:/home/docker/cp-test_ha-954317_ha-954317-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 ssh -n ha-954317 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 ssh -n ha-954317-m02 "sudo cat /home/docker/cp-test_ha-954317_ha-954317-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 cp ha-954317:/home/docker/cp-test.txt ha-954317-m03:/home/docker/cp-test_ha-954317_ha-954317-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 ssh -n ha-954317 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 ssh -n ha-954317-m03 "sudo cat /home/docker/cp-test_ha-954317_ha-954317-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 cp ha-954317:/home/docker/cp-test.txt ha-954317-m04:/home/docker/cp-test_ha-954317_ha-954317-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 ssh -n ha-954317 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 ssh -n ha-954317-m04 "sudo cat /home/docker/cp-test_ha-954317_ha-954317-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 cp testdata/cp-test.txt ha-954317-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 ssh -n ha-954317-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 cp ha-954317-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile926143797/001/cp-test_ha-954317-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 ssh -n ha-954317-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 cp ha-954317-m02:/home/docker/cp-test.txt ha-954317:/home/docker/cp-test_ha-954317-m02_ha-954317.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 ssh -n ha-954317-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 ssh -n ha-954317 "sudo cat /home/docker/cp-test_ha-954317-m02_ha-954317.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 cp ha-954317-m02:/home/docker/cp-test.txt ha-954317-m03:/home/docker/cp-test_ha-954317-m02_ha-954317-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 ssh -n ha-954317-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 ssh -n ha-954317-m03 "sudo cat /home/docker/cp-test_ha-954317-m02_ha-954317-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 cp ha-954317-m02:/home/docker/cp-test.txt ha-954317-m04:/home/docker/cp-test_ha-954317-m02_ha-954317-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 ssh -n ha-954317-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 ssh -n ha-954317-m04 "sudo cat /home/docker/cp-test_ha-954317-m02_ha-954317-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 cp testdata/cp-test.txt ha-954317-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 ssh -n ha-954317-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 cp ha-954317-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile926143797/001/cp-test_ha-954317-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 ssh -n ha-954317-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 cp ha-954317-m03:/home/docker/cp-test.txt ha-954317:/home/docker/cp-test_ha-954317-m03_ha-954317.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 ssh -n ha-954317-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 ssh -n ha-954317 "sudo cat /home/docker/cp-test_ha-954317-m03_ha-954317.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 cp ha-954317-m03:/home/docker/cp-test.txt ha-954317-m02:/home/docker/cp-test_ha-954317-m03_ha-954317-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 ssh -n ha-954317-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 ssh -n ha-954317-m02 "sudo cat /home/docker/cp-test_ha-954317-m03_ha-954317-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 cp ha-954317-m03:/home/docker/cp-test.txt ha-954317-m04:/home/docker/cp-test_ha-954317-m03_ha-954317-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 ssh -n ha-954317-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 ssh -n ha-954317-m04 "sudo cat /home/docker/cp-test_ha-954317-m03_ha-954317-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 cp testdata/cp-test.txt ha-954317-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 ssh -n ha-954317-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 cp ha-954317-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile926143797/001/cp-test_ha-954317-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 ssh -n ha-954317-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 cp ha-954317-m04:/home/docker/cp-test.txt ha-954317:/home/docker/cp-test_ha-954317-m04_ha-954317.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 ssh -n ha-954317-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 ssh -n ha-954317 "sudo cat /home/docker/cp-test_ha-954317-m04_ha-954317.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 cp ha-954317-m04:/home/docker/cp-test.txt ha-954317-m02:/home/docker/cp-test_ha-954317-m04_ha-954317-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 ssh -n ha-954317-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 ssh -n ha-954317-m02 "sudo cat /home/docker/cp-test_ha-954317-m04_ha-954317-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 cp ha-954317-m04:/home/docker/cp-test.txt ha-954317-m03:/home/docker/cp-test_ha-954317-m04_ha-954317-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 ssh -n ha-954317-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 ssh -n ha-954317-m03 "sudo cat /home/docker/cp-test_ha-954317-m04_ha-954317-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.71s)
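Note: spelled out, one hop of the copy matrix above looks like this (file and node names taken from the log):
out/minikube-linux-amd64 -p ha-954317 cp testdata/cp-test.txt ha-954317-m02:/home/docker/cp-test.txt
out/minikube-linux-amd64 -p ha-954317 ssh -n ha-954317-m02 "sudo cat /home/docker/cp-test.txt"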

TestMultiControlPlane/serial/StopSecondaryNode (12.55s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-954317 node stop m02 -v=7 --alsologtostderr: (11.918357325s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-954317 status -v=7 --alsologtostderr: exit status 7 (634.712335ms)

-- stdout --
	ha-954317
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-954317-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-954317-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-954317-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0819 11:05:36.540205   81972 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:05:36.540328   81972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:05:36.540337   81972 out.go:358] Setting ErrFile to fd 2...
	I0819 11:05:36.540341   81972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:05:36.540537   81972 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-9624/.minikube/bin
	I0819 11:05:36.540714   81972 out.go:352] Setting JSON to false
	I0819 11:05:36.540741   81972 mustload.go:65] Loading cluster: ha-954317
	I0819 11:05:36.540804   81972 notify.go:220] Checking for updates...
	I0819 11:05:36.541274   81972 config.go:182] Loaded profile config "ha-954317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:05:36.541296   81972 status.go:255] checking status of ha-954317 ...
	I0819 11:05:36.541836   81972 cli_runner.go:164] Run: docker container inspect ha-954317 --format={{.State.Status}}
	I0819 11:05:36.558942   81972 status.go:330] ha-954317 host status = "Running" (err=<nil>)
	I0819 11:05:36.558965   81972 host.go:66] Checking if "ha-954317" exists ...
	I0819 11:05:36.559249   81972 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-954317
	I0819 11:05:36.577430   81972 host.go:66] Checking if "ha-954317" exists ...
	I0819 11:05:36.577689   81972 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:05:36.577729   81972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-954317
	I0819 11:05:36.597303   81972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/ha-954317/id_rsa Username:docker}
	I0819 11:05:36.682485   81972 ssh_runner.go:195] Run: systemctl --version
	I0819 11:05:36.686241   81972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:05:36.695867   81972 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 11:05:36.746780   81972 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-08-19 11:05:36.737432203 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 11:05:36.747312   81972 kubeconfig.go:125] found "ha-954317" server: "https://192.168.49.254:8443"
	I0819 11:05:36.747339   81972 api_server.go:166] Checking apiserver status ...
	I0819 11:05:36.747377   81972 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:05:36.758489   81972 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1521/cgroup
	I0819 11:05:36.767681   81972 api_server.go:182] apiserver freezer: "4:freezer:/docker/c69a8fc50c946c871fdfcf85db1b2eecde3784466279979c1cdad08772762ac7/crio/crio-4115d323f6e571bcfa891086c66ceb299705c10465abdbae19f5561f8eacbc46"
	I0819 11:05:36.767750   81972 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c69a8fc50c946c871fdfcf85db1b2eecde3784466279979c1cdad08772762ac7/crio/crio-4115d323f6e571bcfa891086c66ceb299705c10465abdbae19f5561f8eacbc46/freezer.state
	I0819 11:05:36.776095   81972 api_server.go:204] freezer state: "THAWED"
	I0819 11:05:36.776125   81972 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0819 11:05:36.779891   81972 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0819 11:05:36.779914   81972 status.go:422] ha-954317 apiserver status = Running (err=<nil>)
	I0819 11:05:36.779923   81972 status.go:257] ha-954317 status: &{Name:ha-954317 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:05:36.779939   81972 status.go:255] checking status of ha-954317-m02 ...
	I0819 11:05:36.780167   81972 cli_runner.go:164] Run: docker container inspect ha-954317-m02 --format={{.State.Status}}
	I0819 11:05:36.798286   81972 status.go:330] ha-954317-m02 host status = "Stopped" (err=<nil>)
	I0819 11:05:36.798323   81972 status.go:343] host is not running, skipping remaining checks
	I0819 11:05:36.798335   81972 status.go:257] ha-954317-m02 status: &{Name:ha-954317-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:05:36.798361   81972 status.go:255] checking status of ha-954317-m03 ...
	I0819 11:05:36.798621   81972 cli_runner.go:164] Run: docker container inspect ha-954317-m03 --format={{.State.Status}}
	I0819 11:05:36.815710   81972 status.go:330] ha-954317-m03 host status = "Running" (err=<nil>)
	I0819 11:05:36.815733   81972 host.go:66] Checking if "ha-954317-m03" exists ...
	I0819 11:05:36.815967   81972 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-954317-m03
	I0819 11:05:36.833071   81972 host.go:66] Checking if "ha-954317-m03" exists ...
	I0819 11:05:36.833321   81972 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:05:36.833365   81972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-954317-m03
	I0819 11:05:36.850416   81972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/ha-954317-m03/id_rsa Username:docker}
	I0819 11:05:36.934626   81972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:05:36.944810   81972 kubeconfig.go:125] found "ha-954317" server: "https://192.168.49.254:8443"
	I0819 11:05:36.944835   81972 api_server.go:166] Checking apiserver status ...
	I0819 11:05:36.944861   81972 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:05:36.954423   81972 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1436/cgroup
	I0819 11:05:36.963368   81972 api_server.go:182] apiserver freezer: "4:freezer:/docker/ad64d9e9be13cf231b09d928236788c6b316326c3cfd335e46b42d77cfb9320d/crio/crio-16503e17d6771c59b759a8c49c0e22f2a077b7fdeac20193a4f1c7faa8ba9f77"
	I0819 11:05:36.963432   81972 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ad64d9e9be13cf231b09d928236788c6b316326c3cfd335e46b42d77cfb9320d/crio/crio-16503e17d6771c59b759a8c49c0e22f2a077b7fdeac20193a4f1c7faa8ba9f77/freezer.state
	I0819 11:05:36.971814   81972 api_server.go:204] freezer state: "THAWED"
	I0819 11:05:36.971847   81972 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0819 11:05:36.975670   81972 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0819 11:05:36.975693   81972 status.go:422] ha-954317-m03 apiserver status = Running (err=<nil>)
	I0819 11:05:36.975700   81972 status.go:257] ha-954317-m03 status: &{Name:ha-954317-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:05:36.975715   81972 status.go:255] checking status of ha-954317-m04 ...
	I0819 11:05:36.975935   81972 cli_runner.go:164] Run: docker container inspect ha-954317-m04 --format={{.State.Status}}
	I0819 11:05:36.993540   81972 status.go:330] ha-954317-m04 host status = "Running" (err=<nil>)
	I0819 11:05:36.993568   81972 host.go:66] Checking if "ha-954317-m04" exists ...
	I0819 11:05:36.993929   81972 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-954317-m04
	I0819 11:05:37.011504   81972 host.go:66] Checking if "ha-954317-m04" exists ...
	I0819 11:05:37.011757   81972 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:05:37.011799   81972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-954317-m04
	I0819 11:05:37.029838   81972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/ha-954317-m04/id_rsa Username:docker}
	I0819 11:05:37.118875   81972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:05:37.130013   81972 status.go:257] ha-954317-m04 status: &{Name:ha-954317-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.55s)
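Note: the status log above shows the three-step probe used for each control-plane node: find the kube-apiserver process with pgrep, confirm its freezer cgroup is THAWED, then GET /healthz on the load-balanced endpoint. Below is a minimal Go sketch of the final HTTP leg only; the VIP and port (192.168.49.254:8443) are taken from the log, and InsecureSkipVerify stands in for loading the cluster CA, so this is illustrative rather than minikube's actual implementation.

// healthz_probe.go: probe the apiserver the way the log above does,
// treating HTTP 200 with body "ok" as Running.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustrative only: a real check would trust the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}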

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.48s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.48s)

TestMultiControlPlane/serial/RestartSecondaryNode (22.52s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-954317 node start m02 -v=7 --alsologtostderr: (21.060269416s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-amd64 -p ha-954317 status -v=7 --alsologtostderr: (1.365698525s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (22.52s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.53s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (2.528162782s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.53s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (136.95s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-954317 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-954317 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-954317 -v=7 --alsologtostderr: (36.501917359s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-954317 --wait=true -v=7 --alsologtostderr
E0819 11:06:45.356752   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/functional-675456/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:06:45.363172   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/functional-675456/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:06:45.374824   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/functional-675456/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:06:45.396217   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/functional-675456/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:06:45.438019   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/functional-675456/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:06:45.519867   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/functional-675456/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:06:45.682094   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/functional-675456/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:06:46.004211   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/functional-675456/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:06:46.646368   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/functional-675456/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:06:47.928496   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/functional-675456/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:06:50.490145   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/functional-675456/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:06:55.612238   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/functional-675456/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:07:03.461782   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:07:05.854354   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/functional-675456/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:07:26.336252   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/functional-675456/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:07:31.165911   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:08:07.298500   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/functional-675456/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-954317 --wait=true -v=7 --alsologtostderr: (1m40.356281581s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-954317
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (136.95s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.33s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-954317 node delete m03 -v=7 --alsologtostderr: (10.595028692s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.33s)
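Note: the final assertion above renders every node's conditions through a go-template and expects each Ready condition to print "True". Below is a minimal Go equivalent of that check, assuming kubectl is on PATH and the kubeconfig points at the cluster:

// ready_check.go: list nodes via kubectl and report each node's
// Ready condition, mirroring the go-template assertion above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var nl nodeList
	if err := json.Unmarshal(out, &nl); err != nil {
		log.Fatal(err)
	}
	for _, n := range nl.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" {
				fmt.Printf("%s Ready=%s\n", n.Metadata.Name, c.Status)
			}
		}
	}
}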

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.45s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.45s)

TestMultiControlPlane/serial/StopCluster (35.55s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-954317 stop -v=7 --alsologtostderr: (35.449448417s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-954317 status -v=7 --alsologtostderr: exit status 7 (102.263045ms)

-- stdout --
	ha-954317
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-954317-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-954317-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0819 11:09:06.882408   98819 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:09:06.882671   98819 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:09:06.882682   98819 out.go:358] Setting ErrFile to fd 2...
	I0819 11:09:06.882686   98819 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:09:06.882871   98819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-9624/.minikube/bin
	I0819 11:09:06.883031   98819 out.go:352] Setting JSON to false
	I0819 11:09:06.883058   98819 mustload.go:65] Loading cluster: ha-954317
	I0819 11:09:06.883113   98819 notify.go:220] Checking for updates...
	I0819 11:09:06.883601   98819 config.go:182] Loaded profile config "ha-954317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:09:06.883627   98819 status.go:255] checking status of ha-954317 ...
	I0819 11:09:06.884087   98819 cli_runner.go:164] Run: docker container inspect ha-954317 --format={{.State.Status}}
	I0819 11:09:06.905388   98819 status.go:330] ha-954317 host status = "Stopped" (err=<nil>)
	I0819 11:09:06.905415   98819 status.go:343] host is not running, skipping remaining checks
	I0819 11:09:06.905423   98819 status.go:257] ha-954317 status: &{Name:ha-954317 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:09:06.905468   98819 status.go:255] checking status of ha-954317-m02 ...
	I0819 11:09:06.905854   98819 cli_runner.go:164] Run: docker container inspect ha-954317-m02 --format={{.State.Status}}
	I0819 11:09:06.923671   98819 status.go:330] ha-954317-m02 host status = "Stopped" (err=<nil>)
	I0819 11:09:06.923709   98819 status.go:343] host is not running, skipping remaining checks
	I0819 11:09:06.923731   98819 status.go:257] ha-954317-m02 status: &{Name:ha-954317-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:09:06.923757   98819 status.go:255] checking status of ha-954317-m04 ...
	I0819 11:09:06.924017   98819 cli_runner.go:164] Run: docker container inspect ha-954317-m04 --format={{.State.Status}}
	I0819 11:09:06.941478   98819 status.go:330] ha-954317-m04 host status = "Stopped" (err=<nil>)
	I0819 11:09:06.941499   98819 status.go:343] host is not running, skipping remaining checks
	I0819 11:09:06.941505   98819 status.go:257] ha-954317-m04 status: &{Name:ha-954317-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.55s)
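Note: exit status 7 is the expected outcome here, not a failure. As I read minikube's status command, the exit code is a bitmask of component states (bit 0: host not running, bit 1: kubelet not running, bit 2: apiserver not running), so a fully stopped cluster yields 1+2+4 = 7; treat the exact flag meanings as an assumption about minikube internals. A sketch of that decoding:

// status_exit.go: decode "exit status 7" under the assumed bitmask.
package main

import "fmt"

const (
	hostNotRunning      = 1 << 0 // assumed: host/VM stopped
	kubeletNotRunning   = 1 << 1 // assumed: kubelet stopped
	apiserverNotRunning = 1 << 2 // assumed: apiserver stopped
)

func main() {
	code := hostNotRunning | kubeletNotRunning | apiserverNotRunning
	fmt.Println("all components stopped =>", code) // prints 7
}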

TestMultiControlPlane/serial/RestartCluster (58.11s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-954317 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0819 11:09:29.220760   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/functional-675456/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-954317 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (57.301571685s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (58.11s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.47s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.47s)

TestMultiControlPlane/serial/AddSecondaryNode (51s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-954317 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-954317 --control-plane -v=7 --alsologtostderr: (50.179964333s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-954317 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (51.00s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.62s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.62s)

TestJSONOutput/start/Command (45.07s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-884259 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0819 11:11:45.356220   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/functional-675456/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-884259 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (45.068425546s)
--- PASS: TestJSONOutput/start/Command (45.07s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.68s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-884259 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-884259 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.68s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-884259 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-884259 --output=json --user=testUser: (5.680334347s)
--- PASS: TestJSONOutput/stop/Command (5.68s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-782238 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-782238 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (66.800662ms)

-- stdout --
	{"specversion":"1.0","id":"86e1e682-5e1d-43e3-ab17-99eaabc1b26f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-782238] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"59190d91-2eae-48ea-a228-37e16bd7ef2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19476"}}
	{"specversion":"1.0","id":"8823331b-02ce-410d-afca-d756947cd10b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d9097e6a-f2a4-48c4-9b8e-b9f4363d45d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19476-9624/kubeconfig"}}
	{"specversion":"1.0","id":"5237d8fc-e740-4bcf-9d2c-66f314b3987c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-9624/.minikube"}}
	{"specversion":"1.0","id":"f2502a1f-d533-42dc-a503-b33b9cc2e5a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"25af7213-3fea-4d95-a079-cc90faa9345e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"68acc990-2a21-4435-9d62-5a8099577295","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-782238" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-782238
--- PASS: TestErrorJSONOutput (0.21s)
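Note: with --output=json, minikube emits one CloudEvents-style JSON object per line, as in the stdout above; the type field distinguishes steps, info messages, and errors (io.k8s.sigs.minikube.step/info/error). A small decoder sketch, with the event shape inferred from this sample output:

// events_decode.go: read a minikube --output=json stream on stdin and
// print each event's type and message, e.g.:
//   minikube start -o json | go run events_decode.go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore any non-JSON lines
		}
		fmt.Printf("%-35s %s\n", ev.Type, ev.Data["message"])
	}
}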

TestKicCustomNetwork/create_custom_network (37.07s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-987962 --network=
E0819 11:12:03.461375   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:12:13.062742   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/functional-675456/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-987962 --network=: (34.981402597s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-987962" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-987962
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-987962: (2.070331679s)
--- PASS: TestKicCustomNetwork/create_custom_network (37.07s)

TestKicCustomNetwork/use_default_bridge_network (23.41s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-563446 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-563446 --network=bridge: (21.462436445s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-563446" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-563446
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-563446: (1.929794761s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.41s)

TestKicExistingNetwork (26.1s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-659332 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-659332 --network=existing-network: (24.413143648s)
helpers_test.go:175: Cleaning up "existing-network-659332" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-659332
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-659332: (1.541159873s)
--- PASS: TestKicExistingNetwork (26.10s)
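Note: this test exercises reusing a network that minikube did not create: a Docker network named existing-network is created first, then handed to minikube via --network. A sketch of that sequence from Go; the profile name and subnet below are illustrative, not taken from the log:

// existing_network.go: pre-create a Docker network, then start a
// minikube profile attached to it.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	if err != nil {
		fmt.Println("error:", err)
	}
}

func main() {
	run("docker", "network", "create", "--subnet=192.168.77.0/24", "existing-network")
	run("minikube", "start", "-p", "existing-net-demo", "--driver=docker", "--network=existing-network")
}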

TestKicCustomSubnet (27.11s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-596126 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-596126 --subnet=192.168.60.0/24: (25.074322045s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-596126 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-596126" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-596126
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-596126: (2.016124498s)
--- PASS: TestKicCustomSubnet (27.11s)

TestKicStaticIP (23.69s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-440074 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-440074 --static-ip=192.168.200.200: (21.54767111s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-440074 ip
helpers_test.go:175: Cleaning up "static-ip-440074" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-440074
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-440074: (2.021675052s)
--- PASS: TestKicStaticIP (23.69s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (51s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-824775 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-824775 --driver=docker  --container-runtime=crio: (20.821086139s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-827265 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-827265 --driver=docker  --container-runtime=crio: (25.013941121s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-824775
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-827265
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-827265" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-827265
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-827265: (1.879349866s)
helpers_test.go:175: Cleaning up "first-824775" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-824775
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-824775: (2.207736446s)
--- PASS: TestMinikubeProfile (51.00s)

TestMountStart/serial/StartWithMountFirst (8.59s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-993666 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-993666 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.592626759s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.59s)

TestMountStart/serial/VerifyMountFirst (0.24s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-993666 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

TestMountStart/serial/StartWithMountSecond (8.92s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-008001 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-008001 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.917508874s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.92s)

TestMountStart/serial/VerifyMountSecond (0.23s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-008001 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.23s)

TestMountStart/serial/DeleteFirst (1.6s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-993666 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-993666 --alsologtostderr -v=5: (1.59979152s)
--- PASS: TestMountStart/serial/DeleteFirst (1.60s)

TestMountStart/serial/VerifyMountPostDelete (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-008001 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

TestMountStart/serial/Stop (1.17s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-008001
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-008001: (1.171517225s)
--- PASS: TestMountStart/serial/Stop (1.17s)

TestMountStart/serial/RestartStopped (7.69s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-008001
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-008001: (6.691456009s)
--- PASS: TestMountStart/serial/RestartStopped (7.69s)

TestMountStart/serial/VerifyMountPostStop (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-008001 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

TestMultiNode/serial/FreshStart2Nodes (73.47s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-497926 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0819 11:16:45.356625   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/functional-675456/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-497926 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m13.036856715s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (73.47s)

TestMultiNode/serial/DeployApp2Nodes (4.54s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-497926 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-497926 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-497926 -- rollout status deployment/busybox: (3.218405498s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-497926 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-497926 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-497926 -- exec busybox-7dff88458-2znw5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-497926 -- exec busybox-7dff88458-zbl27 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-497926 -- exec busybox-7dff88458-2znw5 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-497926 -- exec busybox-7dff88458-zbl27 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-497926 -- exec busybox-7dff88458-2znw5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-497926 -- exec busybox-7dff88458-zbl27 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.54s)

TestMultiNode/serial/PingHostFrom2Pods (0.7s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-497926 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-497926 -- exec busybox-7dff88458-2znw5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-497926 -- exec busybox-7dff88458-2znw5 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-497926 -- exec busybox-7dff88458-zbl27 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-497926 -- exec busybox-7dff88458-zbl27 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.70s)
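Note: the pipeline nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 extracts the host's IP from busybox nslookup output, whose fifth line carries the answer, and the follow-up ping -c 1 proves each pod can reach it. A Go sketch of the same extraction over a sample transcript; the transcript layout is an assumption about busybox nslookup output, not copied from this run:

// host_ip_extract.go: take line 5 of an nslookup transcript and its
// third whitespace-separated field (strings.Fields collapses runs of
// spaces, slightly looser than cut -d' ', but equivalent here).
package main

import (
	"fmt"
	"strings"
)

func main() {
	sample := `Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.67.1`
	lines := strings.Split(sample, "\n")
	if len(lines) < 5 {
		return
	}
	fields := strings.Fields(lines[4]) // NR==5
	if len(fields) >= 3 {
		fmt.Println(fields[2]) // 192.168.67.1
	}
}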

TestMultiNode/serial/AddNode (29.56s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-497926 -v 3 --alsologtostderr
E0819 11:17:03.461835   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-497926 -v 3 --alsologtostderr: (28.988375132s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (29.56s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-497926 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.28s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.28s)

TestMultiNode/serial/CopyFile (8.85s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 cp testdata/cp-test.txt multinode-497926:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 ssh -n multinode-497926 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 cp multinode-497926:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile415079081/001/cp-test_multinode-497926.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 ssh -n multinode-497926 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 cp multinode-497926:/home/docker/cp-test.txt multinode-497926-m02:/home/docker/cp-test_multinode-497926_multinode-497926-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 ssh -n multinode-497926 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 ssh -n multinode-497926-m02 "sudo cat /home/docker/cp-test_multinode-497926_multinode-497926-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 cp multinode-497926:/home/docker/cp-test.txt multinode-497926-m03:/home/docker/cp-test_multinode-497926_multinode-497926-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 ssh -n multinode-497926 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 ssh -n multinode-497926-m03 "sudo cat /home/docker/cp-test_multinode-497926_multinode-497926-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 cp testdata/cp-test.txt multinode-497926-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 ssh -n multinode-497926-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 cp multinode-497926-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile415079081/001/cp-test_multinode-497926-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 ssh -n multinode-497926-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 cp multinode-497926-m02:/home/docker/cp-test.txt multinode-497926:/home/docker/cp-test_multinode-497926-m02_multinode-497926.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 ssh -n multinode-497926-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 ssh -n multinode-497926 "sudo cat /home/docker/cp-test_multinode-497926-m02_multinode-497926.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 cp multinode-497926-m02:/home/docker/cp-test.txt multinode-497926-m03:/home/docker/cp-test_multinode-497926-m02_multinode-497926-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 ssh -n multinode-497926-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 ssh -n multinode-497926-m03 "sudo cat /home/docker/cp-test_multinode-497926-m02_multinode-497926-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 cp testdata/cp-test.txt multinode-497926-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 ssh -n multinode-497926-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 cp multinode-497926-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile415079081/001/cp-test_multinode-497926-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 ssh -n multinode-497926-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 cp multinode-497926-m03:/home/docker/cp-test.txt multinode-497926:/home/docker/cp-test_multinode-497926-m03_multinode-497926.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 ssh -n multinode-497926-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 ssh -n multinode-497926 "sudo cat /home/docker/cp-test_multinode-497926-m03_multinode-497926.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 cp multinode-497926-m03:/home/docker/cp-test.txt multinode-497926-m02:/home/docker/cp-test_multinode-497926-m03_multinode-497926-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 ssh -n multinode-497926-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 ssh -n multinode-497926-m02 "sudo cat /home/docker/cp-test_multinode-497926-m03_multinode-497926-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.85s)
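The CopyFile steps above repeat one round trip for every node pair: `minikube cp` a file in, then `minikube ssh -n <node> "sudo cat ..."` to confirm the bytes arrived. A minimal Go sketch of that pattern (not the suite's own helper; the profile name and paths are taken from the log for illustration):

package main

import (
	"bytes"
	"fmt"
	"log"
	"os/exec"
)

// run shells out to the minikube binary under test and fails fast on error.
func run(args ...string) string {
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	var out bytes.Buffer
	cmd.Stdout, cmd.Stderr = &out, &out
	if err := cmd.Run(); err != nil {
		log.Fatalf("%v: %v\n%s", args, err, out.String())
	}
	return out.String()
}

func main() {
	const profile = "multinode-497926" // assumed: profile name from the log
	// Copy a local file into the control-plane node...
	run("-p", profile, "cp", "testdata/cp-test.txt", profile+":/home/docker/cp-test.txt")
	// ...then read it back over SSH to confirm the copy landed.
	fmt.Print(run("-p", profile, "ssh", "-n", profile, "sudo cat /home/docker/cp-test.txt"))
}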

                                                
                                    
TestMultiNode/serial/StopNode (2.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-497926 node stop m03: (1.172746993s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-497926 status: exit status 7 (444.511453ms)

                                                
                                                
-- stdout --
	multinode-497926
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-497926-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-497926-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-497926 status --alsologtostderr: exit status 7 (454.205196ms)

                                                
                                                
-- stdout --
	multinode-497926
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-497926-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-497926-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:17:39.521248  163905 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:17:39.521499  163905 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:17:39.521507  163905 out.go:358] Setting ErrFile to fd 2...
	I0819 11:17:39.521511  163905 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:17:39.521717  163905 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-9624/.minikube/bin
	I0819 11:17:39.521884  163905 out.go:352] Setting JSON to false
	I0819 11:17:39.521913  163905 mustload.go:65] Loading cluster: multinode-497926
	I0819 11:17:39.522036  163905 notify.go:220] Checking for updates...
	I0819 11:17:39.522303  163905 config.go:182] Loaded profile config "multinode-497926": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:17:39.522317  163905 status.go:255] checking status of multinode-497926 ...
	I0819 11:17:39.522688  163905 cli_runner.go:164] Run: docker container inspect multinode-497926 --format={{.State.Status}}
	I0819 11:17:39.542043  163905 status.go:330] multinode-497926 host status = "Running" (err=<nil>)
	I0819 11:17:39.542093  163905 host.go:66] Checking if "multinode-497926" exists ...
	I0819 11:17:39.542415  163905 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-497926
	I0819 11:17:39.560180  163905 host.go:66] Checking if "multinode-497926" exists ...
	I0819 11:17:39.560478  163905 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:17:39.560538  163905 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-497926
	I0819 11:17:39.579395  163905 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32904 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/multinode-497926/id_rsa Username:docker}
	I0819 11:17:39.666420  163905 ssh_runner.go:195] Run: systemctl --version
	I0819 11:17:39.670256  163905 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:17:39.680188  163905 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 11:17:39.730237  163905 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-08-19 11:17:39.72079338 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 11:17:39.730778  163905 kubeconfig.go:125] found "multinode-497926" server: "https://192.168.67.2:8443"
	I0819 11:17:39.730804  163905 api_server.go:166] Checking apiserver status ...
	I0819 11:17:39.730835  163905 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:17:39.741490  163905 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1485/cgroup
	I0819 11:17:39.750248  163905 api_server.go:182] apiserver freezer: "4:freezer:/docker/21fc8911c3f6c83fd0285acf6ab3747017ef3928ee44099a56fa0480a12512d9/crio/crio-890a6a1b4272911cdc55545256899f521808e24da7202d140953126a12de20f1"
	I0819 11:17:39.750342  163905 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/21fc8911c3f6c83fd0285acf6ab3747017ef3928ee44099a56fa0480a12512d9/crio/crio-890a6a1b4272911cdc55545256899f521808e24da7202d140953126a12de20f1/freezer.state
	I0819 11:17:39.759214  163905 api_server.go:204] freezer state: "THAWED"
	I0819 11:17:39.759241  163905 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0819 11:17:39.762942  163905 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0819 11:17:39.762966  163905 status.go:422] multinode-497926 apiserver status = Running (err=<nil>)
	I0819 11:17:39.762976  163905 status.go:257] multinode-497926 status: &{Name:multinode-497926 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:17:39.762997  163905 status.go:255] checking status of multinode-497926-m02 ...
	I0819 11:17:39.763267  163905 cli_runner.go:164] Run: docker container inspect multinode-497926-m02 --format={{.State.Status}}
	I0819 11:17:39.782909  163905 status.go:330] multinode-497926-m02 host status = "Running" (err=<nil>)
	I0819 11:17:39.782937  163905 host.go:66] Checking if "multinode-497926-m02" exists ...
	I0819 11:17:39.783205  163905 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-497926-m02
	I0819 11:17:39.799617  163905 host.go:66] Checking if "multinode-497926-m02" exists ...
	I0819 11:17:39.799885  163905 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:17:39.799935  163905 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-497926-m02
	I0819 11:17:39.817899  163905 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/19476-9624/.minikube/machines/multinode-497926-m02/id_rsa Username:docker}
	I0819 11:17:39.902419  163905 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:17:39.912958  163905 status.go:257] multinode-497926-m02 status: &{Name:multinode-497926-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:17:39.913009  163905 status.go:255] checking status of multinode-497926-m03 ...
	I0819 11:17:39.913278  163905 cli_runner.go:164] Run: docker container inspect multinode-497926-m03 --format={{.State.Status}}
	I0819 11:17:39.930097  163905 status.go:330] multinode-497926-m03 host status = "Stopped" (err=<nil>)
	I0819 11:17:39.930117  163905 status.go:343] host is not running, skipping remaining checks
	I0819 11:17:39.930123  163905 status.go:257] multinode-497926-m03 status: &{Name:multinode-497926-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.07s)
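Note the exit code: with one worker stopped, `minikube status` exits 7 rather than 0, while stdout still carries the per-node breakdown; the helpers elsewhere in this report treat exit 7 as "(may be ok)". A short Go sketch of that handling, with the profile name assumed from the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-497926", "status")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes running")
	case errors.As(err, &ee) && ee.ExitCode() == 7:
		// Exit 7 signals a stopped host, not a broken command; the
		// per-node state is still printed.
		fmt.Printf("some node is stopped:\n%s", out)
	default:
		fmt.Printf("status failed: %v\n%s", err, out)
	}
}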

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-497926 node start m03 -v=7 --alsologtostderr: (8.283682783s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.95s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (107.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-497926
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-497926
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-497926: (24.947716054s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-497926 --wait=true -v=8 --alsologtostderr
E0819 11:18:26.527266   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-497926 --wait=true -v=8 --alsologtostderr: (1m22.705436908s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-497926
--- PASS: TestMultiNode/serial/RestartKeepsNodes (107.75s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-497926 node delete m03: (4.699052294s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.26s)
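The go-template passed to kubectl above prints one Ready-condition status per node, so after deleting m03 the assertion reduces to "every remaining line is True". A sketch of that check in Go (the template string is copied from the log; running it against the current kubeconfig context is an assumption):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same go-template as above: emit the Ready condition status, one per node.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		log.Fatal(err)
	}
	for _, line := range strings.Fields(string(out)) {
		if line != "True" {
			log.Fatalf("node not Ready: %q", line)
		}
	}
	fmt.Println("all remaining nodes Ready")
}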

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-497926 stop: (23.624418622s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-497926 status: exit status 7 (83.126374ms)

                                                
                                                
-- stdout --
	multinode-497926
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-497926-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-497926 status --alsologtostderr: exit status 7 (83.191208ms)

                                                
                                                
-- stdout --
	multinode-497926
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-497926-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:20:05.639816  174005 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:20:05.639917  174005 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:20:05.639928  174005 out.go:358] Setting ErrFile to fd 2...
	I0819 11:20:05.639934  174005 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:20:05.640152  174005 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-9624/.minikube/bin
	I0819 11:20:05.640361  174005 out.go:352] Setting JSON to false
	I0819 11:20:05.640396  174005 mustload.go:65] Loading cluster: multinode-497926
	I0819 11:20:05.640505  174005 notify.go:220] Checking for updates...
	I0819 11:20:05.640827  174005 config.go:182] Loaded profile config "multinode-497926": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:20:05.640843  174005 status.go:255] checking status of multinode-497926 ...
	I0819 11:20:05.641223  174005 cli_runner.go:164] Run: docker container inspect multinode-497926 --format={{.State.Status}}
	I0819 11:20:05.659930  174005 status.go:330] multinode-497926 host status = "Stopped" (err=<nil>)
	I0819 11:20:05.659954  174005 status.go:343] host is not running, skipping remaining checks
	I0819 11:20:05.659960  174005 status.go:257] multinode-497926 status: &{Name:multinode-497926 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:20:05.659990  174005 status.go:255] checking status of multinode-497926-m02 ...
	I0819 11:20:05.660255  174005 cli_runner.go:164] Run: docker container inspect multinode-497926-m02 --format={{.State.Status}}
	I0819 11:20:05.678449  174005 status.go:330] multinode-497926-m02 host status = "Stopped" (err=<nil>)
	I0819 11:20:05.678471  174005 status.go:343] host is not running, skipping remaining checks
	I0819 11:20:05.678478  174005 status.go:257] multinode-497926-m02 status: &{Name:multinode-497926-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.79s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (45.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-497926 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-497926 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (45.038646477s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-497926 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (45.59s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (25.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-497926
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-497926-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-497926-m02 --driver=docker  --container-runtime=crio: exit status 14 (63.618076ms)

                                                
                                                
-- stdout --
	* [multinode-497926-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19476
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19476-9624/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-9624/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-497926-m02' is duplicated with machine name 'multinode-497926-m02' in profile 'multinode-497926'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-497926-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-497926-m03 --driver=docker  --container-runtime=crio: (23.771284866s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-497926
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-497926: exit status 80 (260.481468ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-497926 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-497926-m03 already exists in multinode-497926-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-497926-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-497926-m03: (1.844869353s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.98s)
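Two distinct refusals are exercised here: creating a profile whose name collides with an existing machine name exits 14 (MK_USAGE), and `node add` exits 80 (GUEST_NODE_ADD) when the next node name is already taken by another profile. A sketch that surfaces both exit codes (names are taken from the log; this is an illustration, not the test's own code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// code extracts the process exit code, or 0 when the command succeeded.
func code(err error) int {
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode()
	}
	return 0
}

func main() {
	bin := "out/minikube-linux-amd64"
	// Profile name clashing with an existing machine name: expect 14.
	fmt.Println("start:", code(exec.Command(bin, "start", "-p", "multinode-497926-m02", "--driver=docker").Run()))
	// Next node name already owned by another profile: expect 80.
	fmt.Println("node add:", code(exec.Command(bin, "node", "add", "-p", "multinode-497926").Run()))
}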

                                                
                                    
TestPreload (139.34s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-534827 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0819 11:21:45.356380   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/functional-675456/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:22:03.465152   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-534827 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m43.618947908s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-534827 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-534827 image pull gcr.io/k8s-minikube/busybox: (2.47137008s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-534827
E0819 11:23:08.425132   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/functional-675456/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-534827: (5.68459916s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-534827 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-534827 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (25.098128543s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-534827 image list
helpers_test.go:175: Cleaning up "test-preload-534827" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-534827
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-534827: (2.255710402s)
--- PASS: TestPreload (139.34s)
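The preload scenario above boils down to: create a cluster without preloaded images on an old Kubernetes, pull an extra image, stop, restart with defaults, and confirm `image list` still shows the pulled image. A compact Go sketch of that flow, assuming the profile name from the log:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// mk runs the minikube binary under test and returns its combined output.
func mk(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	const p = "test-preload-534827" // assumed profile name
	mk("-p", p, "image", "pull", "gcr.io/k8s-minikube/busybox")
	mk("stop", "-p", p)
	mk("start", "-p", p, "--wait=true")
	if !strings.Contains(mk("-p", p, "image", "list"), "busybox") {
		log.Fatal("pulled image missing after restart")
	}
	fmt.Println("image survived restart")
}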

                                                
                                    
TestScheduledStopUnix (97.02s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-813311 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-813311 --memory=2048 --driver=docker  --container-runtime=crio: (21.043567422s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-813311 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-813311 -n scheduled-stop-813311
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-813311 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-813311 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-813311 -n scheduled-stop-813311
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-813311
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-813311 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-813311
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-813311: exit status 7 (64.731838ms)

                                                
                                                
-- stdout --
	scheduled-stop-813311
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-813311 -n scheduled-stop-813311
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-813311 -n scheduled-stop-813311: exit status 7 (64.152931ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-813311" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-813311
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-813311: (4.693116567s)
--- PASS: TestScheduledStopUnix (97.02s)
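The scheduled-stop flow above arms a timer with `--schedule`, disarms it with `--cancel-scheduled`, re-arms it, and then waits for `status` to report exit code 7 once the stop fires. A rough Go sketch of that loop (profile name and timings are assumptions; errors from the arming calls are ignored for brevity):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const p = "scheduled-stop-813311"
	bin := "out/minikube-linux-amd64"
	exec.Command(bin, "stop", "-p", p, "--schedule", "15s").Run()      // arm the timer
	exec.Command(bin, "stop", "-p", p, "--cancel-scheduled").Run()     // disarm it
	exec.Command(bin, "stop", "-p", p, "--schedule", "15s").Run()      // arm it again
	for deadline := time.Now().Add(2 * time.Minute); time.Now().Before(deadline); time.Sleep(5 * time.Second) {
		err := exec.Command(bin, "status", "-p", p).Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 7 { // 7 = host stopped ("may be ok")
			fmt.Println("cluster stopped on schedule")
			return
		}
	}
	fmt.Println("timed out waiting for scheduled stop")
}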

                                                
                                    
TestInsufficientStorage (10.06s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-602793 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-602793 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.74352267s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"95bc3c31-845f-41dd-bf78-6e896a1934a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-602793] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7fb03c54-0850-4ec3-a7fa-7af89b9daf84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19476"}}
	{"specversion":"1.0","id":"48ef4bcc-5f1b-4389-b3a7-8ee9434983f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e5134458-3b8a-4229-8060-80c223160980","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19476-9624/kubeconfig"}}
	{"specversion":"1.0","id":"9422a4e2-f98f-447f-aa54-72c4b9187fcc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-9624/.minikube"}}
	{"specversion":"1.0","id":"0f66bdd5-12dd-4271-8508-13cc7f9d2ca2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"96c52d41-3862-4cce-a1d2-420885ab46f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f9b33aa0-1376-460d-8fb1-42e0b870aedc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"7c3c2cd8-4d55-463f-a3d6-53ea4d8bf07c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"0dd4287c-13cd-4228-821f-7763cd833442","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"61bb995a-31f3-4d07-ad8c-f422c5f52558","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"362cd3d0-86b5-4a0e-ad0c-b2e5b3213fcb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-602793\" primary control-plane node in \"insufficient-storage-602793\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"be71cef9-3041-42d9-b85c-d29247eaef43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1723740748-19452 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"5f96b0a3-96ab-4d4e-9c70-98c06ae23e2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"3022d0a0-cedc-4477-a2cf-62a5023461ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-602793 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-602793 --output=json --layout=cluster: exit status 7 (252.444066ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-602793","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-602793","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 11:25:25.475544  196736 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-602793" does not appear in /home/jenkins/minikube-integration/19476-9624/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-602793 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-602793 --output=json --layout=cluster: exit status 7 (250.719843ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-602793","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-602793","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 11:25:25.726717  196836 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-602793" does not appear in /home/jenkins/minikube-integration/19476-9624/kubeconfig
	E0819 11:25:25.736454  196836 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/insufficient-storage-602793/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-602793" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-602793
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-602793: (1.815703629s)
--- PASS: TestInsufficientStorage (10.06s)
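With `--output=json`, minikube emits one CloudEvents-style object per line, and the failure above arrives as an io.k8s.sigs.minikube.error event carrying the advice text and exitcode 26 (RSRC_DOCKER_STORAGE). A small Go sketch that scans such a stream and surfaces the error event (field names follow the log lines above):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the per-line JSON shape shown in the stdout above.
type event struct {
	Type string `json:"type"`
	Data struct {
		Message  string `json:"message"`
		Advice   string `json:"advice"`
		ExitCode string `json:"exitcode"`
		Name     string `json:"name"`
	} `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. piped from `minikube start --output=json`
	sc.Buffer(make([]byte, 1024*1024), 1024*1024)
	for sc.Scan() {
		var e event
		if json.Unmarshal(sc.Bytes(), &e) != nil {
			continue // skip non-JSON noise
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s (exit %s): %s\n%s\n", e.Data.Name, e.Data.ExitCode, e.Data.Message, e.Data.Advice)
		}
	}
}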

                                                
                                    
TestRunningBinaryUpgrade (90.89s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3033816717 start -p running-upgrade-548758 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0819 11:26:45.356379   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/functional-675456/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:27:03.460902   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3033816717 start -p running-upgrade-548758 --memory=2200 --vm-driver=docker  --container-runtime=crio: (51.337379441s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-548758 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-548758 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.724608656s)
helpers_test.go:175: Cleaning up "running-upgrade-548758" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-548758
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-548758: (2.506325125s)
--- PASS: TestRunningBinaryUpgrade (90.89s)

                                                
                                    
TestKubernetesUpgrade (340.01s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-949905 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-949905 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (44.737712769s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-949905
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-949905: (1.253539703s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-949905 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-949905 status --format={{.Host}}: exit status 7 (84.193277ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-949905 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-949905 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m24.951482705s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-949905 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-949905 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-949905 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (72.739124ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-949905] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19476
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19476-9624/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-9624/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-949905
	    minikube start -p kubernetes-upgrade-949905 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9499052 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-949905 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-949905 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-949905 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.587450244s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-949905" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-949905
E0819 11:32:03.461152   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-949905: (2.243354497s)
--- PASS: TestKubernetesUpgrade (340.01s)
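The downgrade probe above is the interesting branch: pointing an existing v1.31.0 cluster at --kubernetes-version=v1.20.0 exits 106 (K8S_DOWNGRADE_UNSUPPORTED), and the CLI's own suggestion is to delete and recreate. A sketch of acting on that suggestion (binary path, profile, and versions taken from the log):

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	bin, p := "out/minikube-linux-amd64", "kubernetes-upgrade-949905"
	err := exec.Command(bin, "start", "-p", p, "--kubernetes-version=v1.20.0", "--driver=docker").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 106 {
		// Downgrade refused: recreate from scratch, as the CLI suggests.
		for _, args := range [][]string{
			{"delete", "-p", p},
			{"start", "-p", p, "--kubernetes-version=v1.20.0", "--driver=docker"},
		} {
			if out, err := exec.Command(bin, args...).CombinedOutput(); err != nil {
				log.Fatalf("%v: %v\n%s", args, err, out)
			}
		}
		fmt.Println("recreated cluster at v1.20.0")
	}
}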

                                                
                                    
TestMissingContainerUpgrade (154.4s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3589504414 start -p missing-upgrade-505169 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3589504414 start -p missing-upgrade-505169 --memory=2200 --driver=docker  --container-runtime=crio: (1m13.539878998s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-505169
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-505169: (19.151439578s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-505169
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-505169 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-505169 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (57.382924753s)
helpers_test.go:175: Cleaning up "missing-upgrade-505169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-505169
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-505169: (1.990384676s)
--- PASS: TestMissingContainerUpgrade (154.40s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-027710 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-027710 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (76.192512ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-027710] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19476
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19476-9624/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-9624/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
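Exit 14 here is a pure usage error: --no-kubernetes cannot be combined with a Kubernetes version, and the message points at `minikube config unset kubernetes-version` for the case where the version pin comes from global config. A sketch of retrying after clearing that pin (profile and driver flags assumed from the log):

package main

import (
	"errors"
	"log"
	"os/exec"
)

func main() {
	bin, p := "out/minikube-linux-amd64", "NoKubernetes-027710"
	err := exec.Command(bin, "start", "-p", p, "--no-kubernetes", "--driver=docker").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 14 {
		// MK_USAGE: clear the lingering version pin, then start again.
		exec.Command(bin, "config", "unset", "kubernetes-version").Run()
		if err := exec.Command(bin, "start", "-p", p, "--no-kubernetes", "--driver=docker").Run(); err != nil {
			log.Fatal(err)
		}
	}
}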

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.35s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.35s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (29.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-027710 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-027710 --driver=docker  --container-runtime=crio: (29.024358915s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-027710 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (29.35s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (153.78s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1090895507 start -p stopped-upgrade-123822 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1090895507 start -p stopped-upgrade-123822 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m45.376657294s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1090895507 -p stopped-upgrade-123822 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1090895507 -p stopped-upgrade-123822 stop: (2.531506011s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-123822 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-123822 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (45.874227771s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (153.78s)
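The upgrade path under test is: create the cluster with an older released binary (the /tmp path above is a temporary download of v1.26.0), stop it, then start the same profile with the freshly built binary so it must adopt the stopped cluster's state. A sketch of that sequence (binary paths are assumptions; the temp name in the log is randomized):

package main

import (
	"log"
	"os/exec"
)

// run executes the given binary and fails fast, printing its output.
func run(bin string, args ...string) {
	out, err := exec.Command(bin, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v: %v\n%s", bin, args, err, out)
	}
}

func main() {
	const p = "stopped-upgrade-123822"  // assumed profile
	oldBin := "/tmp/minikube-v1.26.0"   // assumed path to the released binary
	newBin := "out/minikube-linux-amd64"
	run(oldBin, "start", "-p", p, "--memory=2200", "--vm-driver=docker", "--container-runtime=crio")
	run(oldBin, "-p", p, "stop")
	run(newBin, "start", "-p", p, "--memory=2200", "--driver=docker", "--container-runtime=crio")
}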

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (8.89s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-027710 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-027710 --no-kubernetes --driver=docker  --container-runtime=crio: (6.30363438s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-027710 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-027710 status -o json: exit status 2 (444.378101ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-027710","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-027710
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-027710: (2.137026396s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.89s)

                                                
                                    
TestNoKubernetes/serial/Start (8.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-027710 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-027710 --no-kubernetes --driver=docker  --container-runtime=crio: (8.181694945s)
--- PASS: TestNoKubernetes/serial/Start (8.18s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-027710 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-027710 "sudo systemctl is-active --quiet service kubelet": exit status 1 (247.043604ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)
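The probe above asks systemd, via `minikube ssh`, whether kubelet is active; on a --no-kubernetes profile the unit is inactive, systemctl exits non-zero (status 3 in the stderr above), and the ssh wrapper propagates that as exit 1, which is exactly what the test wants. A sketch of the same probe in Go (profile name assumed):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-027710",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		// Non-zero exit means the kubelet unit is not active.
		fmt.Println("kubelet inactive, as expected for --no-kubernetes:", err)
		return
	}
	fmt.Println("unexpected: kubelet is active")
}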

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.86s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-027710
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-027710: (1.182758468s)
--- PASS: TestNoKubernetes/serial/Stop (1.18s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-027710 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-027710 --driver=docker  --container-runtime=crio: (6.958718555s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.96s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-027710 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-027710 "sudo systemctl is-active --quiet service kubelet": exit status 1 (255.792402ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (2.19s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-123822
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-123822: (2.189733632s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.19s)

                                                
                                    
TestPause/serial/Start (43.58s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-638809 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-638809 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (43.57801046s)
--- PASS: TestPause/serial/Start (43.58s)

TestNetworkPlugins/group/false (3.1s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-321955 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-321955 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (137.645102ms)

-- stdout --
	* [false-321955] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19476
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19476-9624/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-9624/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0819 11:29:06.161387  249936 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:29:06.161516  249936 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:29:06.161526  249936 out.go:358] Setting ErrFile to fd 2...
	I0819 11:29:06.161530  249936 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:29:06.161776  249936 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-9624/.minikube/bin
	I0819 11:29:06.162430  249936 out.go:352] Setting JSON to false
	I0819 11:29:06.163690  249936 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4286,"bootTime":1724062660,"procs":282,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 11:29:06.163764  249936 start.go:139] virtualization: kvm guest
	I0819 11:29:06.166065  249936 out.go:177] * [false-321955] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 11:29:06.167402  249936 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 11:29:06.167456  249936 notify.go:220] Checking for updates...
	I0819 11:29:06.169524  249936 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:29:06.170559  249936 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19476-9624/kubeconfig
	I0819 11:29:06.171580  249936 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-9624/.minikube
	I0819 11:29:06.172618  249936 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 11:29:06.173688  249936 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:29:06.175139  249936 config.go:182] Loaded profile config "cert-expiration-701282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:29:06.175266  249936 config.go:182] Loaded profile config "kubernetes-upgrade-949905": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:29:06.175381  249936 config.go:182] Loaded profile config "pause-638809": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:29:06.175458  249936 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:29:06.199531  249936 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 11:29:06.199684  249936 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 11:29:06.248132  249936 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:76 SystemTime:2024-08-19 11:29:06.238753522 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 11:29:06.248237  249936 docker.go:307] overlay module found
	I0819 11:29:06.250082  249936 out.go:177] * Using the docker driver based on user configuration
	I0819 11:29:06.251168  249936 start.go:297] selected driver: docker
	I0819 11:29:06.251190  249936 start.go:901] validating driver "docker" against <nil>
	I0819 11:29:06.251201  249936 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:29:06.253006  249936 out.go:201] 
	W0819 11:29:06.254045  249936 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0819 11:29:06.255180  249936 out.go:201] 

** /stderr **
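Note: this is a usage-time rejection, not a runtime failure. With --cni=false there is no pod network, and the crio runtime ships no fallback, so minikube exits with MK_USAGE before a profile is ever created. A minimal sketch of the failing invocation and one plausible fix (--cni=bridge selects minikube's bundled bridge CNI; any concrete CNI would satisfy the check, and the fix line is illustrative, not run in this job):

    $ out/minikube-linux-amd64 start -p false-321955 --cni=false --driver=docker --container-runtime=crio
    X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
    $ out/minikube-linux-amd64 start -p false-321955 --cni=bridge --driver=docker --container-runtime=crio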
net_test.go:88: 
----------------------- debugLogs start: false-321955 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-321955

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-321955

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-321955

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-321955

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-321955

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-321955

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-321955

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-321955

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-321955

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-321955

>>> host: /etc/nsswitch.conf:
* Profile "false-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321955"

>>> host: /etc/hosts:
* Profile "false-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321955"

>>> host: /etc/resolv.conf:
* Profile "false-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321955"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-321955

>>> host: crictl pods:
* Profile "false-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321955"

>>> host: crictl containers:
* Profile "false-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321955"

>>> k8s: describe netcat deployment:
error: context "false-321955" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-321955" does not exist

>>> k8s: netcat logs:
error: context "false-321955" does not exist

>>> k8s: describe coredns deployment:
error: context "false-321955" does not exist

>>> k8s: describe coredns pods:
error: context "false-321955" does not exist

>>> k8s: coredns logs:
error: context "false-321955" does not exist

>>> k8s: describe api server pod(s):
error: context "false-321955" does not exist

>>> k8s: api server logs:
error: context "false-321955" does not exist

>>> host: /etc/cni:
* Profile "false-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321955"

>>> host: ip a s:
* Profile "false-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321955"

>>> host: ip r s:
* Profile "false-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321955"

>>> host: iptables-save:
* Profile "false-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321955"

>>> host: iptables table nat:
* Profile "false-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321955"

>>> k8s: describe kube-proxy daemon set:
error: context "false-321955" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-321955" does not exist

>>> k8s: kube-proxy logs:
error: context "false-321955" does not exist

>>> host: kubelet daemon status:
* Profile "false-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321955"

>>> host: kubelet daemon config:
* Profile "false-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321955"

>>> k8s: kubelet logs:
* Profile "false-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321955"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321955"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321955"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19476-9624/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 19 Aug 2024 11:28:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-701282
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19476-9624/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 19 Aug 2024 11:27:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-949905
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19476-9624/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 19 Aug 2024 11:29:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.94.2:8443
  name: pause-638809
contexts:
- context:
    cluster: cert-expiration-701282
    extensions:
    - extension:
        last-update: Mon, 19 Aug 2024 11:28:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: cert-expiration-701282
  name: cert-expiration-701282
- context:
    cluster: kubernetes-upgrade-949905
    user: kubernetes-upgrade-949905
  name: kubernetes-upgrade-949905
- context:
    cluster: pause-638809
    extensions:
    - extension:
        last-update: Mon, 19 Aug 2024 11:29:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-638809
  name: pause-638809
current-context: pause-638809
kind: Config
preferences: {}
users:
- name: cert-expiration-701282
  user:
    client-certificate: /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/cert-expiration-701282/client.crt
    client-key: /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/cert-expiration-701282/client.key
- name: kubernetes-upgrade-949905
  user:
    client-certificate: /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/kubernetes-upgrade-949905/client.crt
    client-key: /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/kubernetes-upgrade-949905/client.key
- name: pause-638809
  user:
    client-certificate: /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/pause-638809/client.crt
    client-key: /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/pause-638809/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-321955

>>> host: docker daemon status:
* Profile "false-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321955"

>>> host: docker daemon config:
* Profile "false-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321955"

>>> host: /etc/docker/daemon.json:
* Profile "false-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321955"

>>> host: docker system info:
* Profile "false-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321955"

>>> host: cri-docker daemon status:
* Profile "false-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321955"

>>> host: cri-docker daemon config:
* Profile "false-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321955"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321955"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321955"

>>> host: cri-dockerd version:
* Profile "false-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321955"

>>> host: containerd daemon status:
* Profile "false-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321955"

>>> host: containerd daemon config:
* Profile "false-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321955"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321955"

>>> host: /etc/containerd/config.toml:
* Profile "false-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321955"

>>> host: containerd config dump:
* Profile "false-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321955"

>>> host: crio daemon status:
* Profile "false-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321955"

>>> host: crio daemon config:
* Profile "false-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321955"

>>> host: /etc/crio:
* Profile "false-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321955"

>>> host: crio config:
* Profile "false-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321955"
----------------------- debugLogs end: false-321955 [took: 2.795583945s] --------------------------------
helpers_test.go:175: Cleaning up "false-321955" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-321955
--- PASS: TestNetworkPlugins/group/false (3.10s)
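Note: the repeated "context was not found" and "Profile ... not found" lines in the debug dump above are expected, since the start command was rejected before any profile or kubeconfig entry for false-321955 was created. A quick cross-check against the kubeconfig dumped above (a sketch):

    $ kubectl config get-contexts -o name
    cert-expiration-701282
    kubernetes-upgrade-949905
    pause-638809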

TestStartStop/group/old-k8s-version/serial/FirstStart (112.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-345288 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-345288 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (1m52.01328223s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (112.01s)

TestPause/serial/SecondStartNoReconfiguration (34.01s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-638809 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-638809 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (33.990934842s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (34.01s)

TestPause/serial/Pause (0.72s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-638809 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.72s)

TestPause/serial/VerifyStatus (0.3s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-638809 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-638809 --output=json --layout=cluster: exit status 2 (298.311889ms)

-- stdout --
	{"Name":"pause-638809","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-638809","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)
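Note: --output=json --layout=cluster keeps the status machine-readable even though the command exits non-zero for a paused cluster. Pulling the interesting fields out of the stdout above (a sketch, assuming jq is available on the host):

    $ out/minikube-linux-amd64 status -p pause-638809 --output=json --layout=cluster | jq -r '.StatusName, .Nodes[0].Components.kubelet.StatusName'
    Paused
    Stopped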

TestPause/serial/Unpause (0.63s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-638809 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.63s)

TestPause/serial/PauseAgain (0.84s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-638809 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.84s)

TestPause/serial/DeletePaused (2.81s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-638809 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-638809 --alsologtostderr -v=5: (2.811253616s)
--- PASS: TestPause/serial/DeletePaused (2.81s)

TestPause/serial/VerifyDeletedResources (0.72s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-638809
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-638809: exit status 1 (17.975953ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-638809: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.72s)
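Note: the three commands above map one-to-one onto the resources a profile owns: its container, its volume, and its network. The same verification by hand (a sketch; empty output from the filters and a "no such volume" error mean the delete was complete):

    $ docker ps -a --filter name=pause-638809 --format '{{.Names}}'
    $ docker volume inspect pause-638809
    $ docker network ls --filter name=pause-638809 --format '{{.Name}}'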

TestStartStop/group/no-preload/serial/FirstStart (80.94s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-104806 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-104806 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (1m20.941357885s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (80.94s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-345288 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [39b6e6ad-8d50-43ea-ae3f-8367c27c7eb8] Pending
helpers_test.go:344: "busybox" [39b6e6ad-8d50-43ea-ae3f-8367c27c7eb8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [39b6e6ad-8d50-43ea-ae3f-8367c27c7eb8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003203836s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-345288 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.41s)
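Note: the 8m0s wait above is the harness polling pod state through helpers_test.go; the closest one-liner equivalent from the command line is kubectl wait (a sketch, not what the harness itself runs):

    $ kubectl --context old-k8s-version-345288 wait --for=condition=ready pod -l integration-test=busybox --timeout=480s
    pod/busybox condition met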

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.79s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-345288 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-345288 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.79s)

TestStartStop/group/old-k8s-version/serial/Stop (12.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-345288 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-345288 --alsologtostderr -v=3: (12.04111697s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.04s)

TestStartStop/group/no-preload/serial/DeployApp (10.27s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-104806 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [54165760-5d83-459d-8944-a88c0cc07f8c] Pending
helpers_test.go:344: "busybox" [54165760-5d83-459d-8944-a88c0cc07f8c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [54165760-5d83-459d-8944-a88c0cc07f8c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.005061439s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-104806 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.27s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-345288 -n old-k8s-version-345288
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-345288 -n old-k8s-version-345288: exit status 7 (91.009729ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-345288 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)
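Note: "exit status 7 (may be ok)" follows the status exit-code encoding described in minikube status --help, where the host, cluster and Kubernetes states are set on the low bits of the code (1 + 2 + 4 = 7), so 7 is what a cleanly stopped profile should report. A sketch of the same check:

    $ out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-345288 -n old-k8s-version-345288; echo $?
    Stopped
    7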

TestStartStop/group/old-k8s-version/serial/SecondStart (121.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-345288 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-345288 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m1.344096361s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-345288 -n old-k8s-version-345288
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (121.65s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.93s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-104806 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-104806 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.93s)

TestStartStop/group/no-preload/serial/Stop (12.11s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-104806 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-104806 --alsologtostderr -v=3: (12.111128005s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.11s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (52.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-279921 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0819 11:31:45.356279   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/functional-675456/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-279921 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (52.264541567s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (52.26s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-104806 -n no-preload-104806
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-104806 -n no-preload-104806: exit status 7 (90.228102ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-104806 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (286.12s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-104806 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-104806 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m45.805835722s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-104806 -n no-preload-104806
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (286.12s)

TestStartStop/group/newest-cni/serial/FirstStart (28.64s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-038093 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-038093 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (28.636302606s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (28.64s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-279921 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9b74788c-82f5-4c62-96a3-ccb579acd6b3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9b74788c-82f5-4c62-96a3-ccb579acd6b3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003443623s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-279921 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.25s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.83s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-038093 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.83s)

TestStartStop/group/newest-cni/serial/Stop (2s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-038093 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-038093 --alsologtostderr -v=3: (2.002444219s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.00s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-038093 -n newest-cni-038093
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-038093 -n newest-cni-038093: exit status 7 (68.332229ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-038093 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/newest-cni/serial/SecondStart (12.81s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-038093 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-038093 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (12.506786888s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-038093 -n newest-cni-038093
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.81s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-279921 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-279921 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.84s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-279921 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-279921 --alsologtostderr -v=3: (11.890513886s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.89s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-038093 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/newest-cni/serial/Pause (2.55s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-038093 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-038093 -n newest-cni-038093
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-038093 -n newest-cni-038093: exit status 2 (293.971584ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-038093 -n newest-cni-038093
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-038093 -n newest-cni-038093: exit status 2 (281.776697ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-038093 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-038093 -n newest-cni-038093
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-038093 -n newest-cni-038093
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.55s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-279921 -n default-k8s-diff-port-279921
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-279921 -n default-k8s-diff-port-279921: exit status 7 (90.119094ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-279921 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (302.37s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-279921 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-279921 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (5m2.054288264s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-279921 -n default-k8s-diff-port-279921
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (302.37s)

TestStartStop/group/embed-certs/serial/FirstStart (43.59s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-981091 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-981091 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (43.587518318s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (43.59s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-54qqt" [0336de36-2a4a-4562-a38a-0d63151f7dfb] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004092319s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/DeployApp (10.25s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-981091 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [19eaf53b-ca2a-4a45-a3df-83ae52bd5cb1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [19eaf53b-ca2a-4a45-a3df-83ae52bd5cb1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004156357s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-981091 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.25s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-54qqt" [0336de36-2a4a-4562-a38a-0d63151f7dfb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003700832s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-345288 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-345288 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/old-k8s-version/serial/Pause (2.63s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-345288 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-345288 -n old-k8s-version-345288
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-345288 -n old-k8s-version-345288: exit status 2 (288.075997ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-345288 -n old-k8s-version-345288
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-345288 -n old-k8s-version-345288: exit status 2 (288.549957ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-345288 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-345288 -n old-k8s-version-345288
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-345288 -n old-k8s-version-345288
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.63s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-981091 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-981091 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)

TestStartStop/group/embed-certs/serial/Stop (11.93s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-981091 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-981091 --alsologtostderr -v=3: (11.932164102s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.93s)

TestNetworkPlugins/group/auto/Start (43.15s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-321955 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-321955 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (43.154419906s)
--- PASS: TestNetworkPlugins/group/auto/Start (43.15s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-981091 -n embed-certs-981091
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-981091 -n embed-certs-981091: exit status 7 (71.970124ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-981091 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (263.95s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-981091 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-981091 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m23.624555299s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-981091 -n embed-certs-981091
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (263.95s)

TestNetworkPlugins/group/auto/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-321955 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

TestNetworkPlugins/group/auto/NetCatPod (10.19s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-321955 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-t6jfn" [685afdd2-3f13-4c4b-8866-af67301da465] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-t6jfn" [685afdd2-3f13-4c4b-8866-af67301da465] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003681215s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.19s)

TestNetworkPlugins/group/auto/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-321955 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-321955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-321955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)

TestNetworkPlugins/group/kindnet/Start (43.54s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-321955 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0819 11:35:06.529574   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-321955 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (43.541655721s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (43.54s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-xcqs8" [eb55de98-761c-4e90-aeb1-beed68d600a8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004930608s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-321955 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.17s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-321955 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-94krp" [82c190e2-db87-499b-83a9-9d0f6761afba] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-94krp" [82c190e2-db87-499b-83a9-9d0f6761afba] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003301952s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.17s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-321955 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-321955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-321955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

TestNetworkPlugins/group/calico/Start (60.85s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-321955 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0819 11:36:29.755973   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/old-k8s-version-345288/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-321955 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m0.846309015s)
--- PASS: TestNetworkPlugins/group/calico/Start (60.85s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-txf86" [006b0e85-6bc3-4153-bf1e-ae70a60c8f7a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003935636s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-txf86" [006b0e85-6bc3-4153-bf1e-ae70a60c8f7a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004117803s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-104806 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-104806 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/no-preload/serial/Pause (2.72s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-104806 --alsologtostderr -v=1
E0819 11:36:45.356745   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/functional-675456/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-104806 -n no-preload-104806
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-104806 -n no-preload-104806: exit status 2 (291.369007ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-104806 -n no-preload-104806
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-104806 -n no-preload-104806: exit status 2 (293.874224ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-104806 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-104806 -n no-preload-104806
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-104806 -n no-preload-104806
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.72s)

TestNetworkPlugins/group/custom-flannel/Start (51.35s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-321955 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0819 11:37:03.461408   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/addons-454931/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-321955 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (51.350356621s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (51.35s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-lf5wc" [3a1ac252-464b-4825-8a22-802f2063d626] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004922684s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-321955 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

TestNetworkPlugins/group/calico/NetCatPod (11.21s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-321955 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8z7hf" [ee17c4b8-79ad-4632-b66d-52568b792a86] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0819 11:37:31.199674   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/old-k8s-version-345288/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-8z7hf" [ee17c4b8-79ad-4632-b66d-52568b792a86] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004128153s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.21s)

TestNetworkPlugins/group/calico/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-321955 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-321955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

TestNetworkPlugins/group/calico/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-321955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-321955 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-321955 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hxdgj" [14f80d2a-30cc-465a-8612-6a035454e9a6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hxdgj" [14f80d2a-30cc-465a-8612-6a035454e9a6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003374807s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.18s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-321955 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-321955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-321955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-47wb6" [519b4a02-f583-492a-8667-91ff23bf2ab5] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004374579s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestNetworkPlugins/group/enable-default-cni/Start (38.02s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-321955 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-321955 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (38.016664342s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (38.02s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-47wb6" [519b4a02-f583-492a-8667-91ff23bf2ab5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004663084s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-279921 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-279921 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.95s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-279921 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-279921 -n default-k8s-diff-port-279921
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-279921 -n default-k8s-diff-port-279921: exit status 2 (322.312083ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-279921 -n default-k8s-diff-port-279921
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-279921 -n default-k8s-diff-port-279921: exit status 2 (323.628635ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-279921 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-279921 -n default-k8s-diff-port-279921
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-279921 -n default-k8s-diff-port-279921
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.95s)

TestNetworkPlugins/group/flannel/Start (54.54s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-321955 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-321955 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (54.539626854s)
--- PASS: TestNetworkPlugins/group/flannel/Start (54.54s)

TestNetworkPlugins/group/bridge/Start (70.34s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-321955 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-321955 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m10.339083846s)
--- PASS: TestNetworkPlugins/group/bridge/Start (70.34s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-227sl" [03b34782-6eea-45e9-8d5a-de744975adf7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003328407s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-227sl" [03b34782-6eea-45e9-8d5a-de744975adf7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004427049s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-981091 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-981091 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-321955 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

TestStartStop/group/embed-certs/serial/Pause (3.37s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-981091 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-981091 -n embed-certs-981091
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-981091 -n embed-certs-981091: exit status 2 (322.130383ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-981091 -n embed-certs-981091
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-981091 -n embed-certs-981091: exit status 2 (394.216302ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-981091 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-981091 -n embed-certs-981091
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-981091 -n embed-certs-981091
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.37s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.22s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-321955 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zlf4s" [7cddde87-92a0-4fc3-8c30-d39eceafc0be] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-zlf4s" [7cddde87-92a0-4fc3-8c30-d39eceafc0be] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.005623459s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.22s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-321955 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-321955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-321955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-jnttf" [92283177-8936-4842-b66f-dd86167031d8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004291798s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-321955 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/flannel/NetCatPod (10.18s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-321955 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-46l42" [72ff5181-bc41-4b73-bd04-2d77ba633aab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-46l42" [72ff5181-bc41-4b73-bd04-2d77ba633aab] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003515505s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.18s)

TestNetworkPlugins/group/flannel/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-321955 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

TestNetworkPlugins/group/flannel/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-321955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

TestNetworkPlugins/group/flannel/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-321955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-321955 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

TestNetworkPlugins/group/bridge/NetCatPod (11.17s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-321955 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-plrwk" [4b05c69b-568c-4fb2-9359-e4bf378e3ca5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-plrwk" [4b05c69b-568c-4fb2-9359-e4bf378e3ca5] Running
E0819 11:39:34.289721   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/auto-321955/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:39:34.296117   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/auto-321955/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:39:34.307539   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/auto-321955/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:39:34.329526   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/auto-321955/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:39:34.372013   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/auto-321955/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:39:34.453525   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/auto-321955/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:39:34.614987   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/auto-321955/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:39:34.936755   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/auto-321955/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:39:35.578834   16413 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/auto-321955/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005226855s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.17s)
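
The cert_rotation errors interleaved above appear to come from the shared test client rather than from this test: client-go is still trying to reload the client certificate of the auto-321955 profile, which had already been deleted, so its client.crt no longer exists on disk. A quick way to check which profiles and contexts actually remain when such errors show up:

    out/minikube-linux-amd64 profile list
    kubectl config get-contexts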

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-321955 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-321955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-321955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

Test skip (25/328)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestAddons/serial/Volcano (0s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-508924" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-508924
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

TestNetworkPlugins/group/kubenet (3.06s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-321955 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-321955

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-321955

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-321955

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-321955

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-321955

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-321955

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-321955

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-321955

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-321955

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-321955

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321955"

>>> host: /etc/hosts:
* Profile "kubenet-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321955"

>>> host: /etc/resolv.conf:
* Profile "kubenet-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321955"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-321955

>>> host: crictl pods:
* Profile "kubenet-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321955"

>>> host: crictl containers:
* Profile "kubenet-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321955"

>>> k8s: describe netcat deployment:
error: context "kubenet-321955" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-321955" does not exist

>>> k8s: netcat logs:
error: context "kubenet-321955" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-321955" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-321955" does not exist

>>> k8s: coredns logs:
error: context "kubenet-321955" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-321955" does not exist

>>> k8s: api server logs:
error: context "kubenet-321955" does not exist

>>> host: /etc/cni:
* Profile "kubenet-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321955"

>>> host: ip a s:
* Profile "kubenet-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321955"

>>> host: ip r s:
* Profile "kubenet-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321955"

>>> host: iptables-save:
* Profile "kubenet-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321955"

>>> host: iptables table nat:
* Profile "kubenet-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321955"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-321955" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-321955" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-321955" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321955"

>>> host: kubelet daemon config:
* Profile "kubenet-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321955"

>>> k8s: kubelet logs:
* Profile "kubenet-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321955"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321955"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321955"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19476-9624/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 19 Aug 2024 11:28:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-701282
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19476-9624/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 19 Aug 2024 11:27:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-949905
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19476-9624/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 19 Aug 2024 11:29:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.94.2:8443
  name: pause-638809
contexts:
- context:
    cluster: cert-expiration-701282
    extensions:
    - extension:
        last-update: Mon, 19 Aug 2024 11:28:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: cert-expiration-701282
  name: cert-expiration-701282
- context:
    cluster: kubernetes-upgrade-949905
    user: kubernetes-upgrade-949905
  name: kubernetes-upgrade-949905
- context:
    cluster: pause-638809
    extensions:
    - extension:
        last-update: Mon, 19 Aug 2024 11:29:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-638809
  name: pause-638809
current-context: pause-638809
kind: Config
preferences: {}
users:
- name: cert-expiration-701282
  user:
    client-certificate: /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/cert-expiration-701282/client.crt
    client-key: /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/cert-expiration-701282/client.key
- name: kubernetes-upgrade-949905
  user:
    client-certificate: /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/kubernetes-upgrade-949905/client.crt
    client-key: /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/kubernetes-upgrade-949905/client.key
- name: pause-638809
  user:
    client-certificate: /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/pause-638809/client.crt
    client-key: /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/pause-638809/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-321955

>>> host: docker daemon status:
* Profile "kubenet-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321955"

>>> host: docker daemon config:
* Profile "kubenet-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321955"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321955"

>>> host: docker system info:
* Profile "kubenet-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321955"

>>> host: cri-docker daemon status:
* Profile "kubenet-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321955"

>>> host: cri-docker daemon config:
* Profile "kubenet-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321955"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321955"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321955"

>>> host: cri-dockerd version:
* Profile "kubenet-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321955"

>>> host: containerd daemon status:
* Profile "kubenet-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321955"

>>> host: containerd daemon config:
* Profile "kubenet-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321955"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321955"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321955"

>>> host: containerd config dump:
* Profile "kubenet-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321955"

>>> host: crio daemon status:
* Profile "kubenet-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321955"

>>> host: crio daemon config:
* Profile "kubenet-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321955"

>>> host: /etc/crio:
* Profile "kubenet-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321955"

>>> host: crio config:
* Profile "kubenet-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321955"

----------------------- debugLogs end: kubenet-321955 [took: 2.910900424s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-321955" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-321955
--- SKIP: TestNetworkPlugins/group/kubenet (3.06s)
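
Every probe in the debugLogs dump above fails with "context was not found" or "Profile ... not found" because the kubenet test is skipped before a cluster is ever started, while the debug collection runs unconditionally. The same errors can be reproduced against any profile that was never started, for example:

    kubectl --context kubenet-321955 get pods -A
    out/minikube-linux-amd64 status -p kubenet-321955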

TestNetworkPlugins/group/cilium (3.25s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-321955 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-321955

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-321955

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-321955

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-321955

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-321955

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-321955

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-321955

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-321955

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-321955

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-321955

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321955"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321955"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321955"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-321955

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321955"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321955"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-321955" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-321955" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-321955" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-321955" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-321955" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-321955" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-321955" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-321955" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321955"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321955"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321955"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321955"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321955"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-321955

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-321955

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-321955" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-321955" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-321955

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-321955

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-321955" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-321955" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-321955" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-321955" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-321955" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321955"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321955"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321955"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321955"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321955"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/19476-9624/.minikube/ca.crt
extensions:
- extension:
last-update: Mon, 19 Aug 2024 11:28:18 UTC
provider: minikube.sigs.k8s.io
version: v1.33.1
name: cluster_info
server: https://192.168.85.2:8443
name: cert-expiration-701282
- cluster:
certificate-authority: /home/jenkins/minikube-integration/19476-9624/.minikube/ca.crt
extensions:
- extension:
last-update: Mon, 19 Aug 2024 11:27:22 UTC
provider: minikube.sigs.k8s.io
version: v1.33.1
name: cluster_info
server: https://192.168.76.2:8443
name: kubernetes-upgrade-949905
- cluster:
certificate-authority: /home/jenkins/minikube-integration/19476-9624/.minikube/ca.crt
extensions:
- extension:
last-update: Mon, 19 Aug 2024 11:29:04 UTC
provider: minikube.sigs.k8s.io
version: v1.33.1
name: cluster_info
server: https://192.168.94.2:8443
name: pause-638809
contexts:
- context:
cluster: cert-expiration-701282
extensions:
- extension:
last-update: Mon, 19 Aug 2024 11:28:18 UTC
provider: minikube.sigs.k8s.io
version: v1.33.1
name: context_info
namespace: default
user: cert-expiration-701282
name: cert-expiration-701282
- context:
cluster: kubernetes-upgrade-949905
user: kubernetes-upgrade-949905
name: kubernetes-upgrade-949905
- context:
cluster: pause-638809
extensions:
- extension:
last-update: Mon, 19 Aug 2024 11:29:04 UTC
provider: minikube.sigs.k8s.io
version: v1.33.1
name: context_info
namespace: default
user: pause-638809
name: pause-638809
current-context: pause-638809
kind: Config
preferences: {}
users:
- name: cert-expiration-701282
user:
client-certificate: /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/cert-expiration-701282/client.crt
client-key: /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/cert-expiration-701282/client.key
- name: kubernetes-upgrade-949905
user:
client-certificate: /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/kubernetes-upgrade-949905/client.crt
client-key: /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/kubernetes-upgrade-949905/client.key
- name: pause-638809
user:
client-certificate: /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/pause-638809/client.crt
client-key: /home/jenkins/minikube-integration/19476-9624/.minikube/profiles/pause-638809/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-321955

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321955"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321955"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321955"

>>> host: docker system info:
* Profile "cilium-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321955"

>>> host: cri-docker daemon status:
* Profile "cilium-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321955"

>>> host: cri-docker daemon config:
* Profile "cilium-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321955"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321955"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321955"

>>> host: cri-dockerd version:
* Profile "cilium-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321955"

>>> host: containerd daemon status:
* Profile "cilium-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321955"

>>> host: containerd daemon config:
* Profile "cilium-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321955"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321955"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321955"

>>> host: containerd config dump:
* Profile "cilium-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321955"

>>> host: crio daemon status:
* Profile "cilium-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321955"

>>> host: crio daemon config:
* Profile "cilium-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321955"

>>> host: /etc/crio:
* Profile "cilium-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321955"

>>> host: crio config:
* Profile "cilium-321955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321955"

----------------------- debugLogs end: cilium-321955 [took: 3.095636525s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-321955" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-321955
--- SKIP: TestNetworkPlugins/group/cilium (3.25s)
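If the cilium group needed to be exercised rather than skipped, the usual entry point is Go's test runner filtered to the subtest; a sketch assuming a minikube source checkout, since the exact arguments this CI job passes are not shown here:

    # Hypothetical direct re-run of just the skipped group
    go test ./test/integration -run 'TestNetworkPlugins/group/cilium' -v -timeout 30m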