Test Report: Docker_Linux_crio 19446

68089f2e899ecb1db727fde03c1d4991123fd325:2024-08-14:35784

Failed tests (2/328)

|-------|-----------------------------------|----------|
| Order | Failed test                       | Duration |
|-------|-----------------------------------|----------|
| 34    | TestAddons/parallel/Ingress       | 151.8s   |
| 36    | TestAddons/parallel/MetricsServer | 302.53s  |
|-------|-----------------------------------|----------|
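The Ingress failure below comes down to a timed-out probe: curl exits with status 28, which is its "operation timed out" code, and minikube's ssh wrapper surfaces it as "Process exited with status 28". A minimal sketch for reproducing the probe by hand, assuming the addons-146898 profile from this run is still up and out/minikube-linux-amd64 is the binary under test; both commands mirror ones the test itself runs, with an explicit curl timeout (-m) added so a hang fails fast:

	# Confirm the ingress-nginx controller is Ready (same selector the test uses)
	kubectl --context addons-146898 wait --for=condition=ready \
	  --namespace=ingress-nginx pod \
	  --selector=app.kubernetes.io/component=controller --timeout=90s

	# Re-run the in-node probe; exit status 28 again means it hung, not that it was refused
	out/minikube-linux-amd64 -p addons-146898 ssh \
	  "curl -s -m 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"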
TestAddons/parallel/Ingress (151.8s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-146898 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-146898 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-146898 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [af5daa7e-71f1-4dfa-a64f-c3fe2cd160b3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [af5daa7e-71f1-4dfa-a64f-c3fe2cd160b3] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.00343112s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-146898 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-146898 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.466573111s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-146898 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-146898 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-146898 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-146898 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-146898 addons disable ingress --alsologtostderr -v=1: (7.606057078s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-146898
helpers_test.go:235: (dbg) docker inspect addons-146898:

-- stdout --
	[
	    {
	        "Id": "033665d39c0a7230426af208a0a609390bc8324f3152d84ce1b4a25599238d2c",
	        "Created": "2024-08-14T16:10:21.614403661Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 22748,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-14T16:10:21.747404839Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a625a3e39975c5bf9755ab525e60a1f8bd16cab9b58877622897d26607806095",
	        "ResolvConfPath": "/var/lib/docker/containers/033665d39c0a7230426af208a0a609390bc8324f3152d84ce1b4a25599238d2c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/033665d39c0a7230426af208a0a609390bc8324f3152d84ce1b4a25599238d2c/hostname",
	        "HostsPath": "/var/lib/docker/containers/033665d39c0a7230426af208a0a609390bc8324f3152d84ce1b4a25599238d2c/hosts",
	        "LogPath": "/var/lib/docker/containers/033665d39c0a7230426af208a0a609390bc8324f3152d84ce1b4a25599238d2c/033665d39c0a7230426af208a0a609390bc8324f3152d84ce1b4a25599238d2c-json.log",
	        "Name": "/addons-146898",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-146898:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-146898",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f8b25431309fbdc2c0fa65361e314b56a81d220c9cda8ed6a3018ac9b0055322-init/diff:/var/lib/docker/overlay2/d41949e4c516eb21351007b40b547059df55afa65c858079d4bf62d2491589b5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f8b25431309fbdc2c0fa65361e314b56a81d220c9cda8ed6a3018ac9b0055322/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f8b25431309fbdc2c0fa65361e314b56a81d220c9cda8ed6a3018ac9b0055322/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f8b25431309fbdc2c0fa65361e314b56a81d220c9cda8ed6a3018ac9b0055322/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-146898",
	                "Source": "/var/lib/docker/volumes/addons-146898/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-146898",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-146898",
	                "name.minikube.sigs.k8s.io": "addons-146898",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5b76a336ebed6afcb0d8509794e0e8e2f1bfbf9e0bf4ba773dfc123eb3abe017",
	            "SandboxKey": "/var/run/docker/netns/5b76a336ebed",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-146898": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "58300098e6e99c5cfa782c54a5523432e4763754ed66ecf3c4f594d976665be8",
	                    "EndpointID": "3c8797cd27cce5ca93d7092488c5ef89fd6c894a75004c754096ec7d9706e668",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-146898",
	                        "033665d39c0a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
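For triage, the inspect dump above shows a healthy node container: running, not OOM-killed, with static IP 192.168.49.2 on the addons-146898 network and all five exposed ports bound to 127.0.0.1 (22/tcp -> 32768 carries the SSH tunnel the failing probe rode over). A hedged one-liner, using docker's built-in Go templating instead of the full inspect dump, to pull just that port map:

	# Print only the published port bindings of the minikube node container
	docker inspect -f '{{json .NetworkSettings.Ports}}' addons-146898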
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-146898 -n addons-146898
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-146898 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-146898 logs -n 25: (1.086248662s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-996390 | jenkins | v1.33.1 | 14 Aug 24 16:09 UTC |                     |
	|         | download-docker-996390                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-996390                                                                   | download-docker-996390 | jenkins | v1.33.1 | 14 Aug 24 16:09 UTC | 14 Aug 24 16:09 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-011666   | jenkins | v1.33.1 | 14 Aug 24 16:09 UTC |                     |
	|         | binary-mirror-011666                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:38739                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-011666                                                                     | binary-mirror-011666   | jenkins | v1.33.1 | 14 Aug 24 16:09 UTC | 14 Aug 24 16:09 UTC |
	| addons  | enable dashboard -p                                                                         | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:09 UTC |                     |
	|         | addons-146898                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:09 UTC |                     |
	|         | addons-146898                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-146898 --wait=true                                                                | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:09 UTC | 14 Aug 24 16:12 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | addons-146898 addons disable                                                                | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:12 UTC | 14 Aug 24 16:12 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:12 UTC | 14 Aug 24 16:12 UTC |
	|         | -p addons-146898                                                                            |                        |         |         |                     |                     |
	| addons  | addons-146898 addons disable                                                                | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:12 UTC | 14 Aug 24 16:12 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:12 UTC | 14 Aug 24 16:12 UTC |
	|         | addons-146898                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:12 UTC | 14 Aug 24 16:12 UTC |
	|         | -p addons-146898                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-146898 ssh cat                                                                       | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:12 UTC | 14 Aug 24 16:12 UTC |
	|         | /opt/local-path-provisioner/pvc-b8279e68-d1f4-45e9-8a5a-4efa6552cee5_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-146898 addons disable                                                                | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:12 UTC | 14 Aug 24 16:13 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-146898 ip                                                                            | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:12 UTC | 14 Aug 24 16:12 UTC |
	| addons  | addons-146898 addons disable                                                                | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:12 UTC | 14 Aug 24 16:12 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-146898 addons disable                                                                | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:13 UTC | 14 Aug 24 16:13 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-146898 addons disable                                                                | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:13 UTC | 14 Aug 24 16:13 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:13 UTC | 14 Aug 24 16:13 UTC |
	|         | addons-146898                                                                               |                        |         |         |                     |                     |
	| addons  | addons-146898 addons                                                                        | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:13 UTC | 14 Aug 24 16:13 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-146898 ssh curl -s                                                                   | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:13 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-146898 addons                                                                        | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:13 UTC | 14 Aug 24 16:13 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-146898 ip                                                                            | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:15 UTC | 14 Aug 24 16:15 UTC |
	| addons  | addons-146898 addons disable                                                                | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:15 UTC | 14 Aug 24 16:15 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-146898 addons disable                                                                | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:15 UTC | 14 Aug 24 16:15 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 16:09:59
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 16:09:59.314800   21995 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:09:59.314895   21995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:09:59.314899   21995 out.go:304] Setting ErrFile to fd 2...
	I0814 16:09:59.314903   21995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:09:59.315071   21995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13813/.minikube/bin
	I0814 16:09:59.315645   21995 out.go:298] Setting JSON to false
	I0814 16:09:59.316461   21995 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3143,"bootTime":1723648656,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 16:09:59.316512   21995 start.go:139] virtualization: kvm guest
	I0814 16:09:59.318739   21995 out.go:177] * [addons-146898] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 16:09:59.320198   21995 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 16:09:59.320195   21995 notify.go:220] Checking for updates...
	I0814 16:09:59.323046   21995 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 16:09:59.324440   21995 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19446-13813/kubeconfig
	I0814 16:09:59.325670   21995 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13813/.minikube
	I0814 16:09:59.326970   21995 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 16:09:59.328258   21995 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 16:09:59.329837   21995 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 16:09:59.350965   21995 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0814 16:09:59.351094   21995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0814 16:09:59.397696   21995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-14 16:09:59.389210633 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0814 16:09:59.397822   21995 docker.go:307] overlay module found
	I0814 16:09:59.399807   21995 out.go:177] * Using the docker driver based on user configuration
	I0814 16:09:59.401466   21995 start.go:297] selected driver: docker
	I0814 16:09:59.401479   21995 start.go:901] validating driver "docker" against <nil>
	I0814 16:09:59.401493   21995 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 16:09:59.402237   21995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0814 16:09:59.445849   21995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-14 16:09:59.437307117 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0814 16:09:59.446035   21995 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 16:09:59.446285   21995 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 16:09:59.448080   21995 out.go:177] * Using Docker driver with root privileges
	I0814 16:09:59.449710   21995 cni.go:84] Creating CNI manager for ""
	I0814 16:09:59.449735   21995 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0814 16:09:59.449747   21995 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0814 16:09:59.449815   21995 start.go:340] cluster config:
	{Name:addons-146898 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-146898 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 16:09:59.451289   21995 out.go:177] * Starting "addons-146898" primary control-plane node in "addons-146898" cluster
	I0814 16:09:59.452617   21995 cache.go:121] Beginning downloading kic base image for docker with crio
	I0814 16:09:59.453876   21995 out.go:177] * Pulling base image v0.0.44-1723567951-19429 ...
	I0814 16:09:59.455042   21995 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 16:09:59.455066   21995 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 in local docker daemon
	I0814 16:09:59.455072   21995 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19446-13813/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0814 16:09:59.455197   21995 cache.go:56] Caching tarball of preloaded images
	I0814 16:09:59.455302   21995 preload.go:172] Found /home/jenkins/minikube-integration/19446-13813/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 16:09:59.455324   21995 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0814 16:09:59.455647   21995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/config.json ...
	I0814 16:09:59.455671   21995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/config.json: {Name:mka383384adb62e92ac44fa7a4a5b834aec85f0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:09:59.470575   21995 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 to local cache
	I0814 16:09:59.470675   21995 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 in local cache directory
	I0814 16:09:59.470696   21995 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 in local cache directory, skipping pull
	I0814 16:09:59.470704   21995 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 exists in cache, skipping pull
	I0814 16:09:59.470711   21995 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 as a tarball
	I0814 16:09:59.470717   21995 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 from local cache
	I0814 16:10:11.599154   21995 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 from cached tarball
	I0814 16:10:11.599200   21995 cache.go:194] Successfully downloaded all kic artifacts
	I0814 16:10:11.599229   21995 start.go:360] acquireMachinesLock for addons-146898: {Name:mk6fb8e1c94b5fd8a8fbd9c1b18b8acac474bc30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 16:10:11.599325   21995 start.go:364] duration metric: took 77.642µs to acquireMachinesLock for "addons-146898"
	I0814 16:10:11.599346   21995 start.go:93] Provisioning new machine with config: &{Name:addons-146898 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-146898 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 16:10:11.599424   21995 start.go:125] createHost starting for "" (driver="docker")
	I0814 16:10:11.601229   21995 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0814 16:10:11.601461   21995 start.go:159] libmachine.API.Create for "addons-146898" (driver="docker")
	I0814 16:10:11.601496   21995 client.go:168] LocalClient.Create starting
	I0814 16:10:11.601602   21995 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19446-13813/.minikube/certs/ca.pem
	I0814 16:10:11.763532   21995 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19446-13813/.minikube/certs/cert.pem
	I0814 16:10:11.964158   21995 cli_runner.go:164] Run: docker network inspect addons-146898 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0814 16:10:11.979557   21995 cli_runner.go:211] docker network inspect addons-146898 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0814 16:10:11.979625   21995 network_create.go:284] running [docker network inspect addons-146898] to gather additional debugging logs...
	I0814 16:10:11.979642   21995 cli_runner.go:164] Run: docker network inspect addons-146898
	W0814 16:10:11.994695   21995 cli_runner.go:211] docker network inspect addons-146898 returned with exit code 1
	I0814 16:10:11.994728   21995 network_create.go:287] error running [docker network inspect addons-146898]: docker network inspect addons-146898: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-146898 not found
	I0814 16:10:11.994745   21995 network_create.go:289] output of [docker network inspect addons-146898]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-146898 not found
	
	** /stderr **
	I0814 16:10:11.994860   21995 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0814 16:10:12.010588   21995 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018f4790}
	I0814 16:10:12.010639   21995 network_create.go:124] attempt to create docker network addons-146898 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0814 16:10:12.010699   21995 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-146898 addons-146898
	I0814 16:10:12.070239   21995 network_create.go:108] docker network addons-146898 192.168.49.0/24 created
	I0814 16:10:12.070269   21995 kic.go:121] calculated static IP "192.168.49.2" for the "addons-146898" container
	I0814 16:10:12.070316   21995 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0814 16:10:12.085419   21995 cli_runner.go:164] Run: docker volume create addons-146898 --label name.minikube.sigs.k8s.io=addons-146898 --label created_by.minikube.sigs.k8s.io=true
	I0814 16:10:12.102037   21995 oci.go:103] Successfully created a docker volume addons-146898
	I0814 16:10:12.102127   21995 cli_runner.go:164] Run: docker run --rm --name addons-146898-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-146898 --entrypoint /usr/bin/test -v addons-146898:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 -d /var/lib
	I0814 16:10:17.100088   21995 cli_runner.go:217] Completed: docker run --rm --name addons-146898-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-146898 --entrypoint /usr/bin/test -v addons-146898:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 -d /var/lib: (4.997925179s)
	I0814 16:10:17.100115   21995 oci.go:107] Successfully prepared a docker volume addons-146898
	I0814 16:10:17.100130   21995 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 16:10:17.100149   21995 kic.go:194] Starting extracting preloaded images to volume ...
	I0814 16:10:17.100198   21995 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19446-13813/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-146898:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 -I lz4 -xf /preloaded.tar -C /extractDir
	I0814 16:10:21.551307   21995 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19446-13813/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-146898:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 -I lz4 -xf /preloaded.tar -C /extractDir: (4.451074031s)
	I0814 16:10:21.551336   21995 kic.go:203] duration metric: took 4.45118457s to extract preloaded images to volume ...
	W0814 16:10:21.551465   21995 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0814 16:10:21.551574   21995 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0814 16:10:21.599983   21995 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-146898 --name addons-146898 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-146898 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-146898 --network addons-146898 --ip 192.168.49.2 --volume addons-146898:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083
	I0814 16:10:21.906833   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Running}}
	I0814 16:10:21.924643   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:21.942254   21995 cli_runner.go:164] Run: docker exec addons-146898 stat /var/lib/dpkg/alternatives/iptables
	I0814 16:10:21.982856   21995 oci.go:144] the created container "addons-146898" has a running status.
	I0814 16:10:21.982886   21995 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa...
	I0814 16:10:22.151232   21995 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0814 16:10:22.171650   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:22.191411   21995 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0814 16:10:22.191433   21995 kic_runner.go:114] Args: [docker exec --privileged addons-146898 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0814 16:10:22.252701   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:22.272149   21995 machine.go:94] provisionDockerMachine start ...
	I0814 16:10:22.272256   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:22.288748   21995 main.go:141] libmachine: Using SSH client type: native
	I0814 16:10:22.288957   21995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0814 16:10:22.288971   21995 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 16:10:22.504203   21995 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-146898
	
	I0814 16:10:22.504227   21995 ubuntu.go:169] provisioning hostname "addons-146898"
	I0814 16:10:22.504272   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:22.521315   21995 main.go:141] libmachine: Using SSH client type: native
	I0814 16:10:22.521566   21995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0814 16:10:22.521592   21995 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-146898 && echo "addons-146898" | sudo tee /etc/hostname
	I0814 16:10:22.659715   21995 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-146898
	
	I0814 16:10:22.659794   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:22.676059   21995 main.go:141] libmachine: Using SSH client type: native
	I0814 16:10:22.676266   21995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0814 16:10:22.676291   21995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-146898' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-146898/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-146898' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 16:10:22.800787   21995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
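
The shell block above is minikube's idempotent hostname pin: do nothing if some /etc/hosts line already ends in the hostname, otherwise rewrite an existing 127.0.1.1 entry in place or append a new one. The same decision logic in stdlib Go, operating on the file contents as a string (a sketch, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// pinHostname reproduces the shell logic above: skip when the hostname is
// already mapped, else rewrite the 127.0.1.1 line or append one.
func pinHostname(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
		return hosts // hostname already present on some line
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(pinHostname(string(data), "addons-146898"))
}
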
	I0814 16:10:22.800816   21995 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13813/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13813/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13813/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13813/.minikube}
	I0814 16:10:22.800854   21995 ubuntu.go:177] setting up certificates
	I0814 16:10:22.800867   21995 provision.go:84] configureAuth start
	I0814 16:10:22.800921   21995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-146898
	I0814 16:10:22.816772   21995 provision.go:143] copyHostCerts
	I0814 16:10:22.816848   21995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13813/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13813/.minikube/key.pem (1679 bytes)
	I0814 16:10:22.816978   21995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13813/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13813/.minikube/ca.pem (1078 bytes)
	I0814 16:10:22.817083   21995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13813/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13813/.minikube/cert.pem (1123 bytes)
	I0814 16:10:22.817169   21995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13813/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13813/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13813/.minikube/certs/ca-key.pem org=jenkins.addons-146898 san=[127.0.0.1 192.168.49.2 addons-146898 localhost minikube]
	I0814 16:10:22.902600   21995 provision.go:177] copyRemoteCerts
	I0814 16:10:22.902663   21995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 16:10:22.902704   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:22.918646   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:23.009130   21995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13813/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 16:10:23.030136   21995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13813/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0814 16:10:23.050062   21995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13813/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 16:10:23.070575   21995 provision.go:87] duration metric: took 269.69399ms to configureAuth
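
The configureAuth phase above generates a server certificate whose SANs are exactly the list in the log: 127.0.0.1, 192.168.49.2, addons-146898, localhost, minikube. A stdlib sketch of building a certificate with those SANs; note that minikube signs with its CA while this sketch self-signs purely for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-146898"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump below
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-146898", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
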
	I0814 16:10:23.070611   21995 ubuntu.go:193] setting minikube options for container-runtime
	I0814 16:10:23.070780   21995 config.go:182] Loaded profile config "addons-146898": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:10:23.070887   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:23.087360   21995 main.go:141] libmachine: Using SSH client type: native
	I0814 16:10:23.087539   21995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0814 16:10:23.087563   21995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 16:10:23.297175   21995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 16:10:23.297204   21995 machine.go:97] duration metric: took 1.025009823s to provisionDockerMachine
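
One quirk worth flagging: the %!s(MISSING) in the sysconfig command above is not shell syntax. It is Go's fmt package marking a %s verb that was rendered with no operand when minikube logged the command template; the SSH output just above shows CRIO_MINIKUBE_OPTIONS was still written correctly, so provisioning succeeded anyway. A two-line reproduction:

package main

import "fmt"

func main() {
	// A %s verb with no operand: fmt substitutes %!s(MISSING) instead of
	// failing, which is exactly the marker seen in the log line above.
	// (go vet flags this; it compiles and runs regardless.)
	fmt.Println(fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %s"))
}
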
	I0814 16:10:23.297217   21995 client.go:171] duration metric: took 11.695713856s to LocalClient.Create
	I0814 16:10:23.297238   21995 start.go:167] duration metric: took 11.695777559s to libmachine.API.Create "addons-146898"
	I0814 16:10:23.297250   21995 start.go:293] postStartSetup for "addons-146898" (driver="docker")
	I0814 16:10:23.297264   21995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 16:10:23.297320   21995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 16:10:23.297365   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:23.313668   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:23.409471   21995 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 16:10:23.412584   21995 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0814 16:10:23.412654   21995 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0814 16:10:23.412667   21995 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0814 16:10:23.412675   21995 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0814 16:10:23.412685   21995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13813/.minikube/addons for local assets ...
	I0814 16:10:23.412744   21995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13813/.minikube/files for local assets ...
	I0814 16:10:23.412766   21995 start.go:296] duration metric: took 115.510043ms for postStartSetup
	I0814 16:10:23.413052   21995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-146898
	I0814 16:10:23.430533   21995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/config.json ...
	I0814 16:10:23.430768   21995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:10:23.430811   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:23.447602   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:23.533366   21995 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0814 16:10:23.537135   21995 start.go:128] duration metric: took 11.937698863s to createHost
	I0814 16:10:23.537153   21995 start.go:83] releasing machines lock for "addons-146898", held for 11.93781786s
	I0814 16:10:23.537201   21995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-146898
	I0814 16:10:23.552354   21995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 16:10:23.552426   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:23.552362   21995 ssh_runner.go:195] Run: cat /version.json
	I0814 16:10:23.552547   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:23.572255   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:23.572608   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:23.736498   21995 ssh_runner.go:195] Run: systemctl --version
	I0814 16:10:23.740633   21995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 16:10:23.877823   21995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0814 16:10:23.882233   21995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 16:10:23.899571   21995 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0814 16:10:23.899658   21995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 16:10:23.925228   21995 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0814 16:10:23.925272   21995 start.go:495] detecting cgroup driver to use...
	I0814 16:10:23.925309   21995 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0814 16:10:23.925371   21995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 16:10:23.939525   21995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 16:10:23.949687   21995 docker.go:217] disabling cri-docker service (if available) ...
	I0814 16:10:23.949739   21995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 16:10:23.962691   21995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 16:10:23.975985   21995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 16:10:24.054908   21995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 16:10:24.130206   21995 docker.go:233] disabling docker service ...
	I0814 16:10:24.130278   21995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 16:10:24.147251   21995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 16:10:24.157614   21995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 16:10:24.228902   21995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 16:10:24.311000   21995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 16:10:24.321749   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 16:10:24.336358   21995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 16:10:24.336431   21995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:10:24.345715   21995 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 16:10:24.345780   21995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:10:24.354949   21995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:10:24.363979   21995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:10:24.373561   21995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 16:10:24.382756   21995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:10:24.391605   21995 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:10:24.406447   21995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
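
The sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf in four passes: pin the pause image, switch cgroup_manager to cgroupfs, re-add conmon_cgroup, and open unprivileged ports via default_sysctls. The same edits with stdlib regexp on a string copy (a sketch; the delete-then-append of conmon_cgroup is folded into one substitution here):

package main

import (
	"fmt"
	"regexp"
)

// patchCrioConf applies roughly the same substitutions as the sed one-liners above.
func patchCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	conf = regexp.MustCompile(`(?m)^default_sysctls *= *\[`).
		ReplaceAllString(conf, "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",")
	return conf
}

func main() {
	sample := "pause_image = \"registry.k8s.io/pause:3.9\"\n" +
		"cgroup_manager = \"systemd\"\n" +
		"default_sysctls = [\n]\n"
	fmt.Print(patchCrioConf(sample))
}
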
	I0814 16:10:24.415919   21995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 16:10:24.423559   21995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 16:10:24.431437   21995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 16:10:24.501627   21995 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 16:10:24.591905   21995 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 16:10:24.591988   21995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 16:10:24.595358   21995 start.go:563] Will wait 60s for crictl version
	I0814 16:10:24.595423   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:10:24.598438   21995 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 16:10:24.631363   21995 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0814 16:10:24.631523   21995 ssh_runner.go:195] Run: crio --version
	I0814 16:10:24.666001   21995 ssh_runner.go:195] Run: crio --version
	I0814 16:10:24.701237   21995 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0814 16:10:24.702465   21995 cli_runner.go:164] Run: docker network inspect addons-146898 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0814 16:10:24.718411   21995 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0814 16:10:24.721759   21995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 16:10:24.731577   21995 kubeadm.go:883] updating cluster {Name:addons-146898 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-146898 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 16:10:24.731687   21995 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 16:10:24.731736   21995 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 16:10:24.794777   21995 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 16:10:24.794798   21995 crio.go:433] Images already preloaded, skipping extraction
	I0814 16:10:24.794839   21995 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 16:10:24.826224   21995 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 16:10:24.826244   21995 cache_images.go:84] Images are preloaded, skipping loading
	I0814 16:10:24.826252   21995 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0814 16:10:24.826338   21995 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-146898 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-146898 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 16:10:24.826398   21995 ssh_runner.go:195] Run: crio config
	I0814 16:10:24.867740   21995 cni.go:84] Creating CNI manager for ""
	I0814 16:10:24.867765   21995 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0814 16:10:24.867777   21995 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 16:10:24.867805   21995 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-146898 NodeName:addons-146898 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 16:10:24.867964   21995 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-146898"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 16:10:24.868024   21995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 16:10:24.876179   21995 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 16:10:24.876239   21995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 16:10:24.883999   21995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0814 16:10:24.899859   21995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 16:10:24.916572   21995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
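
The 2151-byte kubeadm.yaml copied above is rendered from the kubeadm options struct logged earlier through a Go template. A toy text/template sketch with a few of those fields wired through (the template text here is a trimmed stand-in, not minikube's actual template; the field names mirror the options dump):

package main

import (
	"os"
	"text/template"
)

// A trimmed stand-in for minikube's kubeadm config template; only a handful
// of fields from the options dump above are wired through.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

type opts struct {
	AdvertiseAddress string
	APIServerPort    int
	CRISocket        string
	NodeName         string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := t.Execute(os.Stdout, opts{
		AdvertiseAddress: "192.168.49.2",
		APIServerPort:    8443,
		CRISocket:        "unix:///var/run/crio/crio.sock",
		NodeName:         "addons-146898",
	}); err != nil {
		panic(err)
	}
}
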
	I0814 16:10:24.932469   21995 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0814 16:10:24.935660   21995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 16:10:24.945699   21995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 16:10:25.022840   21995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 16:10:25.035349   21995 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898 for IP: 192.168.49.2
	I0814 16:10:25.035377   21995 certs.go:194] generating shared ca certs ...
	I0814 16:10:25.035400   21995 certs.go:226] acquiring lock for ca certs: {Name:mk1285ad10e917a8c21c37d6bbfc6630b395fe15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:25.035524   21995 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13813/.minikube/ca.key
	I0814 16:10:25.145375   21995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13813/.minikube/ca.crt ...
	I0814 16:10:25.145402   21995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13813/.minikube/ca.crt: {Name:mk56de38a5a6a065840a53302703be75913b7540 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:25.145560   21995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13813/.minikube/ca.key ...
	I0814 16:10:25.145570   21995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13813/.minikube/ca.key: {Name:mkb10292095ca52c2c9f762c536853026e7bd0e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:25.145649   21995 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13813/.minikube/proxy-client-ca.key
	I0814 16:10:25.372318   21995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13813/.minikube/proxy-client-ca.crt ...
	I0814 16:10:25.372352   21995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13813/.minikube/proxy-client-ca.crt: {Name:mk0c143209c2ba1cec1748241e70c4402f002142 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:25.372523   21995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13813/.minikube/proxy-client-ca.key ...
	I0814 16:10:25.372535   21995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13813/.minikube/proxy-client-ca.key: {Name:mk1f10b962886ecc72031545289125c894d8b027 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:25.372677   21995 certs.go:256] generating profile certs ...
	I0814 16:10:25.372729   21995 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.key
	I0814 16:10:25.372751   21995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.crt with IP's: []
	I0814 16:10:25.846336   21995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.crt ...
	I0814 16:10:25.846366   21995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.crt: {Name:mkbab85371741d02d75da73bad152a83d2c5d78f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:25.846529   21995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.key ...
	I0814 16:10:25.846539   21995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.key: {Name:mkd682f4af1dbea838f8a0ca34c27f4648750679 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:25.846611   21995 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/apiserver.key.438dba49
	I0814 16:10:25.846630   21995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/apiserver.crt.438dba49 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0814 16:10:26.031631   21995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/apiserver.crt.438dba49 ...
	I0814 16:10:26.031659   21995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/apiserver.crt.438dba49: {Name:mk29493867ecc85fd95db1c4f44fb6995940b598 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:26.031811   21995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/apiserver.key.438dba49 ...
	I0814 16:10:26.031825   21995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/apiserver.key.438dba49: {Name:mk12fb7d0295d04ac0c5b1f91ae368bae36df922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:26.031892   21995 certs.go:381] copying /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/apiserver.crt.438dba49 -> /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/apiserver.crt
	I0814 16:10:26.031978   21995 certs.go:385] copying /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/apiserver.key.438dba49 -> /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/apiserver.key
	I0814 16:10:26.032029   21995 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/proxy-client.key
	I0814 16:10:26.032046   21995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/proxy-client.crt with IP's: []
	I0814 16:10:26.502174   21995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/proxy-client.crt ...
	I0814 16:10:26.502210   21995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/proxy-client.crt: {Name:mkf78e1ebb0c27c1a090b66eac20e6a91bb44b37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:26.502390   21995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/proxy-client.key ...
	I0814 16:10:26.502400   21995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/proxy-client.key: {Name:mk0562feedc116c51ceaaa271ce12c328e2b3fe6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:26.502566   21995 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13813/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 16:10:26.502599   21995 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13813/.minikube/certs/ca.pem (1078 bytes)
	I0814 16:10:26.502624   21995 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13813/.minikube/certs/cert.pem (1123 bytes)
	I0814 16:10:26.502646   21995 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13813/.minikube/certs/key.pem (1679 bytes)
	I0814 16:10:26.503229   21995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13813/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 16:10:26.525140   21995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13813/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 16:10:26.545698   21995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13813/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 16:10:26.567345   21995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13813/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 16:10:26.589135   21995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0814 16:10:26.609794   21995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 16:10:26.630516   21995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 16:10:26.653458   21995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 16:10:26.675308   21995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13813/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 16:10:26.696489   21995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 16:10:26.711702   21995 ssh_runner.go:195] Run: openssl version
	I0814 16:10:26.716703   21995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 16:10:26.724803   21995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:10:26.727905   21995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:10:26.728000   21995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:10:26.734007   21995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 16:10:26.742190   21995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 16:10:26.745293   21995 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
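
The "likely first start" heuristic above is nothing more than a stat over SSH whose ENOENT is read as "no cluster certs yet". Run locally, the same check reduces to:

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	// Mirror the stat check above: a missing client cert implies first start.
	_, err := os.Stat("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	switch {
	case errors.Is(err, fs.ErrNotExist):
		fmt.Println("'apiserver-kubelet-client' cert doesn't exist, likely first start")
	case err != nil:
		fmt.Println("stat failed:", err)
	default:
		fmt.Println("cert present, reusing existing cluster state")
	}
}
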
	I0814 16:10:26.745338   21995 kubeadm.go:392] StartCluster: {Name:addons-146898 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-146898 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 16:10:26.745412   21995 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 16:10:26.745460   21995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 16:10:26.778708   21995 cri.go:89] found id: ""
	I0814 16:10:26.778771   21995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 16:10:26.786818   21995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 16:10:26.794455   21995 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0814 16:10:26.794503   21995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 16:10:26.802178   21995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 16:10:26.802193   21995 kubeadm.go:157] found existing configuration files:
	
	I0814 16:10:26.802232   21995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 16:10:26.809644   21995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 16:10:26.809701   21995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 16:10:26.816827   21995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 16:10:26.824261   21995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 16:10:26.824314   21995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 16:10:26.831763   21995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 16:10:26.839741   21995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 16:10:26.839789   21995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 16:10:26.847303   21995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 16:10:26.855171   21995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 16:10:26.855231   21995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 16:10:26.862683   21995 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0814 16:10:26.897687   21995 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 16:10:26.897748   21995 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 16:10:26.913810   21995 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0814 16:10:26.913876   21995 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1066-gcp
	I0814 16:10:26.913905   21995 kubeadm.go:310] OS: Linux
	I0814 16:10:26.914007   21995 kubeadm.go:310] CGROUPS_CPU: enabled
	I0814 16:10:26.914107   21995 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0814 16:10:26.914186   21995 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0814 16:10:26.914266   21995 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0814 16:10:26.914346   21995 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0814 16:10:26.914420   21995 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0814 16:10:26.914493   21995 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0814 16:10:26.914559   21995 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0814 16:10:26.914621   21995 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0814 16:10:26.963124   21995 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 16:10:26.963267   21995 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 16:10:26.963398   21995 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 16:10:26.969795   21995 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 16:10:26.972364   21995 out.go:204]   - Generating certificates and keys ...
	I0814 16:10:26.972461   21995 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 16:10:26.972541   21995 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 16:10:27.170427   21995 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0814 16:10:27.496497   21995 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0814 16:10:27.711288   21995 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0814 16:10:27.827672   21995 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0814 16:10:27.966921   21995 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0814 16:10:27.967035   21995 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-146898 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0814 16:10:28.044011   21995 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0814 16:10:28.044149   21995 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-146898 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0814 16:10:28.127325   21995 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0814 16:10:28.198683   21995 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0814 16:10:28.395148   21995 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0814 16:10:28.395214   21995 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 16:10:28.524084   21995 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 16:10:28.817223   21995 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 16:10:29.156724   21995 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 16:10:29.431848   21995 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 16:10:29.641353   21995 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 16:10:29.642621   21995 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 16:10:29.645247   21995 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 16:10:29.647223   21995 out.go:204]   - Booting up control plane ...
	I0814 16:10:29.647356   21995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 16:10:29.647463   21995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 16:10:29.648330   21995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 16:10:29.659996   21995 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 16:10:29.665342   21995 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 16:10:29.665404   21995 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 16:10:29.738614   21995 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 16:10:29.738723   21995 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 16:10:30.240106   21995 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.600369ms
	I0814 16:10:30.240216   21995 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 16:10:34.741915   21995 kubeadm.go:310] [api-check] The API server is healthy after 4.501781554s
	I0814 16:10:34.751954   21995 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 16:10:34.763579   21995 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 16:10:34.780232   21995 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 16:10:34.780457   21995 kubeadm.go:310] [mark-control-plane] Marking the node addons-146898 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 16:10:34.786848   21995 kubeadm.go:310] [bootstrap-token] Using token: rjop2a.3xxdxyu2rw5j4mls
	I0814 16:10:34.788398   21995 out.go:204]   - Configuring RBAC rules ...
	I0814 16:10:34.788533   21995 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 16:10:34.791209   21995 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 16:10:34.795776   21995 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 16:10:34.798697   21995 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 16:10:34.800870   21995 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 16:10:34.802890   21995 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 16:10:35.147551   21995 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 16:10:35.565924   21995 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 16:10:36.148537   21995 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 16:10:36.149554   21995 kubeadm.go:310] 
	I0814 16:10:36.149651   21995 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 16:10:36.149673   21995 kubeadm.go:310] 
	I0814 16:10:36.149752   21995 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 16:10:36.149767   21995 kubeadm.go:310] 
	I0814 16:10:36.149827   21995 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 16:10:36.149908   21995 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 16:10:36.149958   21995 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 16:10:36.149987   21995 kubeadm.go:310] 
	I0814 16:10:36.150115   21995 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 16:10:36.150148   21995 kubeadm.go:310] 
	I0814 16:10:36.150211   21995 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 16:10:36.150223   21995 kubeadm.go:310] 
	I0814 16:10:36.150294   21995 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 16:10:36.150400   21995 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 16:10:36.150505   21995 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 16:10:36.150521   21995 kubeadm.go:310] 
	I0814 16:10:36.150634   21995 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 16:10:36.150751   21995 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 16:10:36.150763   21995 kubeadm.go:310] 
	I0814 16:10:36.150868   21995 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rjop2a.3xxdxyu2rw5j4mls \
	I0814 16:10:36.151000   21995 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9e78517872b4f8b632b00f802290dddbf43139dde7a5a320b299f5698ab99227 \
	I0814 16:10:36.151029   21995 kubeadm.go:310] 	--control-plane 
	I0814 16:10:36.151035   21995 kubeadm.go:310] 
	I0814 16:10:36.151132   21995 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 16:10:36.151141   21995 kubeadm.go:310] 
	I0814 16:10:36.151228   21995 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rjop2a.3xxdxyu2rw5j4mls \
	I0814 16:10:36.151328   21995 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9e78517872b4f8b632b00f802290dddbf43139dde7a5a320b299f5698ab99227 
	I0814 16:10:36.153189   21995 kubeadm.go:310] W0814 16:10:26.895177    1299 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 16:10:36.153463   21995 kubeadm.go:310] W0814 16:10:26.895768    1299 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 16:10:36.153647   21995 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1066-gcp\n", err: exit status 1
	I0814 16:10:36.153744   21995 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
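
The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 digest of the cluster CA's DER-encoded Subject Public Key Info, which is how kubeadm lets joining nodes pin the CA. It can be recomputed from the CA certificate with the stdlib:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Hash the DER-encoded SPKI, the same value kubeadm prints as sha256:<hex>.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
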
	I0814 16:10:36.153764   21995 cni.go:84] Creating CNI manager for ""
	I0814 16:10:36.153774   21995 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0814 16:10:36.156438   21995 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0814 16:10:36.157796   21995 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0814 16:10:36.161471   21995 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0814 16:10:36.161491   21995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0814 16:10:36.177769   21995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0814 16:10:36.370500   21995 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 16:10:36.370588   21995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:36.370588   21995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-146898 minikube.k8s.io/updated_at=2024_08_14T16_10_36_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35 minikube.k8s.io/name=addons-146898 minikube.k8s.io/primary=true
	I0814 16:10:36.377437   21995 ops.go:34] apiserver oom_adj: -16
	I0814 16:10:36.457868   21995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:36.958471   21995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:37.458216   21995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:37.958777   21995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:38.458773   21995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:38.958312   21995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:39.458290   21995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:39.957984   21995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:40.458060   21995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:40.544372   21995 kubeadm.go:1113] duration metric: took 4.173855902s to wait for elevateKubeSystemPrivileges
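
The burst of identical `kubectl get sa default` runs above, spaced roughly 500ms apart, is the elevateKubeSystemPrivileges wait loop: poll until the default service account exists, then proceed. Schematically (waitForDefaultSA is a hypothetical name for the pattern, not minikube's function):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

// waitForDefaultSA polls like the repeated log lines above: rerun kubectl
// every 500ms until the default service account appears or we time out.
func waitForDefaultSA(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
			"get", "sa", "default")
		if cmd.Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA(4 * time.Minute); err != nil {
		log.Fatal(err)
	}
	log.Println("default service account is ready")
}
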
	I0814 16:10:40.544407   21995 kubeadm.go:394] duration metric: took 13.799071853s to StartCluster
	I0814 16:10:40.544424   21995 settings.go:142] acquiring lock: {Name:mka72e833cc56b9ba293232cfc25e94fae8a2ef4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:40.544534   21995 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19446-13813/kubeconfig
	I0814 16:10:40.545051   21995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13813/kubeconfig: {Name:mkf1cd97562485c31d14c03886c1adfb8630debe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:40.545274   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0814 16:10:40.545304   21995 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 16:10:40.545359   21995 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
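Each `true` entry in the toEnable map above becomes one of the per-addon `Setting addon ...=true` lines that follow; the interleaved (occasionally out-of-order) timestamps are those addon workers logging concurrently. The same switches are driven from the CLI with the `minikube addons` subcommands, e.g.:

	out/minikube-linux-amd64 -p addons-146898 addons enable metrics-server
	out/minikube-linux-amd64 -p addons-146898 addons list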
	I0814 16:10:40.545460   21995 addons.go:69] Setting yakd=true in profile "addons-146898"
	I0814 16:10:40.545475   21995 addons.go:69] Setting default-storageclass=true in profile "addons-146898"
	I0814 16:10:40.545487   21995 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-146898"
	I0814 16:10:40.545497   21995 config.go:182] Loaded profile config "addons-146898": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:10:40.545499   21995 addons.go:69] Setting cloud-spanner=true in profile "addons-146898"
	I0814 16:10:40.545510   21995 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-146898"
	I0814 16:10:40.545517   21995 addons.go:69] Setting storage-provisioner=true in profile "addons-146898"
	I0814 16:10:40.545534   21995 addons.go:234] Setting addon storage-provisioner=true in "addons-146898"
	I0814 16:10:40.545534   21995 addons.go:234] Setting addon cloud-spanner=true in "addons-146898"
	I0814 16:10:40.545532   21995 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-146898"
	I0814 16:10:40.545544   21995 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-146898"
	I0814 16:10:40.545549   21995 addons.go:69] Setting volcano=true in profile "addons-146898"
	I0814 16:10:40.545560   21995 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-146898"
	I0814 16:10:40.545565   21995 host.go:66] Checking if "addons-146898" exists ...
	I0814 16:10:40.545567   21995 host.go:66] Checking if "addons-146898" exists ...
	I0814 16:10:40.545567   21995 host.go:66] Checking if "addons-146898" exists ...
	I0814 16:10:40.545575   21995 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-146898"
	I0814 16:10:40.545597   21995 host.go:66] Checking if "addons-146898" exists ...
	I0814 16:10:40.545601   21995 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-146898"
	I0814 16:10:40.545571   21995 addons.go:234] Setting addon volcano=true in "addons-146898"
	I0814 16:10:40.545674   21995 addons.go:69] Setting ingress-dns=true in profile "addons-146898"
	I0814 16:10:40.545689   21995 host.go:66] Checking if "addons-146898" exists ...
	I0814 16:10:40.545694   21995 addons.go:234] Setting addon ingress-dns=true in "addons-146898"
	I0814 16:10:40.545724   21995 host.go:66] Checking if "addons-146898" exists ...
	I0814 16:10:40.545882   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.545897   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.546052   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.546054   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.546089   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.546102   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.546200   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.546244   21995 addons.go:69] Setting inspektor-gadget=true in profile "addons-146898"
	I0814 16:10:40.546272   21995 addons.go:234] Setting addon inspektor-gadget=true in "addons-146898"
	I0814 16:10:40.546302   21995 host.go:66] Checking if "addons-146898" exists ...
	I0814 16:10:40.546738   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.545508   21995 addons.go:69] Setting helm-tiller=true in profile "addons-146898"
	I0814 16:10:40.549774   21995 addons.go:234] Setting addon helm-tiller=true in "addons-146898"
	I0814 16:10:40.549818   21995 host.go:66] Checking if "addons-146898" exists ...
	I0814 16:10:40.550357   21995 out.go:177] * Verifying Kubernetes components...
	I0814 16:10:40.550494   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.551913   21995 addons.go:69] Setting ingress=true in profile "addons-146898"
	I0814 16:10:40.551957   21995 addons.go:234] Setting addon ingress=true in "addons-146898"
	I0814 16:10:40.552020   21995 host.go:66] Checking if "addons-146898" exists ...
	I0814 16:10:40.552554   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.553523   21995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 16:10:40.545503   21995 addons.go:234] Setting addon yakd=true in "addons-146898"
	I0814 16:10:40.553636   21995 host.go:66] Checking if "addons-146898" exists ...
	I0814 16:10:40.554055   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.554547   21995 addons.go:69] Setting metrics-server=true in profile "addons-146898"
	I0814 16:10:40.554604   21995 addons.go:234] Setting addon metrics-server=true in "addons-146898"
	I0814 16:10:40.554639   21995 host.go:66] Checking if "addons-146898" exists ...
	I0814 16:10:40.555791   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.545465   21995 addons.go:69] Setting gcp-auth=true in profile "addons-146898"
	I0814 16:10:40.555878   21995 addons.go:69] Setting volumesnapshots=true in profile "addons-146898"
	I0814 16:10:40.556148   21995 addons.go:234] Setting addon volumesnapshots=true in "addons-146898"
	I0814 16:10:40.556212   21995 host.go:66] Checking if "addons-146898" exists ...
	I0814 16:10:40.556311   21995 mustload.go:65] Loading cluster: addons-146898
	I0814 16:10:40.556498   21995 config.go:182] Loaded profile config "addons-146898": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:10:40.556695   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.555883   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.545513   21995 addons.go:69] Setting registry=true in profile "addons-146898"
	I0814 16:10:40.557599   21995 addons.go:234] Setting addon registry=true in "addons-146898"
	I0814 16:10:40.557677   21995 host.go:66] Checking if "addons-146898" exists ...
	I0814 16:10:40.558169   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.589150   21995 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0814 16:10:40.589497   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.589500   21995 addons.go:234] Setting addon default-storageclass=true in "addons-146898"
	I0814 16:10:40.590837   21995 host.go:66] Checking if "addons-146898" exists ...
	I0814 16:10:40.591345   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.591379   21995 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0814 16:10:40.591396   21995 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0814 16:10:40.591453   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
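The Go template in the `docker container inspect -f` call above extracts the host port Docker mapped to the node container's 22/tcp, which is where the `Port:32768` in the sshutil lines further down comes from. Run standalone:

	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  addons-146898
	# prints the mapped SSH port, 32768 in this run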
	W0814 16:10:40.595734   21995 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0814 16:10:40.596348   21995 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0814 16:10:40.599689   21995 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0814 16:10:40.599707   21995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0814 16:10:40.599758   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:40.601378   21995 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-146898"
	I0814 16:10:40.601426   21995 host.go:66] Checking if "addons-146898" exists ...
	I0814 16:10:40.601878   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.616154   21995 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0814 16:10:40.617836   21995 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0814 16:10:40.617863   21995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0814 16:10:40.617926   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:40.620193   21995 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0814 16:10:40.620236   21995 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0814 16:10:40.621496   21995 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0814 16:10:40.621516   21995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0814 16:10:40.621558   21995 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0814 16:10:40.621573   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:40.624320   21995 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0814 16:10:40.624380   21995 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0814 16:10:40.627557   21995 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0814 16:10:40.627580   21995 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0814 16:10:40.627639   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:40.627807   21995 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0814 16:10:40.629460   21995 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0814 16:10:40.630748   21995 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0814 16:10:40.632007   21995 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0814 16:10:40.633304   21995 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0814 16:10:40.637123   21995 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0814 16:10:40.637154   21995 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0814 16:10:40.637220   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:40.645908   21995 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 16:10:40.647241   21995 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 16:10:40.647261   21995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 16:10:40.647320   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:40.651578   21995 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0814 16:10:40.652673   21995 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0814 16:10:40.652692   21995 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0814 16:10:40.652748   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:40.657199   21995 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 16:10:40.657219   21995 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 16:10:40.657272   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:40.676770   21995 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0814 16:10:40.677120   21995 host.go:66] Checking if "addons-146898" exists ...
	I0814 16:10:40.679455   21995 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0814 16:10:40.681021   21995 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0814 16:10:40.682451   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:40.682672   21995 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0814 16:10:40.682683   21995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0814 16:10:40.682730   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:40.685980   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:40.689084   21995 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0814 16:10:40.690578   21995 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0814 16:10:40.692551   21995 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0814 16:10:40.692570   21995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0814 16:10:40.692626   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:40.692790   21995 out.go:177]   - Using image docker.io/busybox:stable
	I0814 16:10:40.694709   21995 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0814 16:10:40.694724   21995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0814 16:10:40.694779   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:40.694789   21995 out.go:177]   - Using image docker.io/registry:2.8.3
	I0814 16:10:40.695074   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:40.698120   21995 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0814 16:10:40.698161   21995 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0814 16:10:40.699236   21995 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 16:10:40.699260   21995 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 16:10:40.699314   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:40.703650   21995 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0814 16:10:40.703672   21995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0814 16:10:40.703786   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:40.703777   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:40.704206   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:40.716710   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:40.718953   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:40.728219   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:40.728941   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:40.729401   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0814 16:10:40.739106   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:40.739459   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:40.739633   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	W0814 16:10:40.745486   21995 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0814 16:10:40.745517   21995 retry.go:31] will retry after 270.544191ms: ssh: handshake failed: EOF
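A handshake EOF at this point usually just means sshd inside the freshly started node container is not accepting connections yet; the retry helper re-dials after a jittered delay (270 ms here). A rough shell equivalent, with a hypothetical fixed delay schedule standing in for retry.go's backoff:

	for delay in 0.3 0.6 1.2; do    # hypothetical delays; minikube jitters these
	  ssh -o StrictHostKeyChecking=no -p 32768 \
	      -i /home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa \
	      docker@127.0.0.1 true && break
	  sleep "$delay"
	done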
	I0814 16:10:40.750685   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:40.758727   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:40.946132   21995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 16:10:41.129133   21995 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0814 16:10:41.129208   21995 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0814 16:10:41.144261   21995 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0814 16:10:41.144358   21995 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0814 16:10:41.145741   21995 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0814 16:10:41.145805   21995 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0814 16:10:41.225823   21995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0814 16:10:41.228239   21995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0814 16:10:41.235116   21995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0814 16:10:41.329774   21995 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0814 16:10:41.329807   21995 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0814 16:10:41.336223   21995 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 16:10:41.336308   21995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0814 16:10:41.336621   21995 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0814 16:10:41.336677   21995 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0814 16:10:41.337202   21995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0814 16:10:41.339463   21995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 16:10:41.341737   21995 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0814 16:10:41.341790   21995 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0814 16:10:41.349555   21995 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0814 16:10:41.349586   21995 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0814 16:10:41.425610   21995 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0814 16:10:41.425708   21995 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0814 16:10:41.449965   21995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0814 16:10:41.532066   21995 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0814 16:10:41.532094   21995 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0814 16:10:41.538040   21995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 16:10:41.541457   21995 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0814 16:10:41.541485   21995 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0814 16:10:41.545573   21995 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0814 16:10:41.545601   21995 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0814 16:10:41.629713   21995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0814 16:10:41.634874   21995 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 16:10:41.634901   21995 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 16:10:41.646193   21995 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0814 16:10:41.646225   21995 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0814 16:10:41.733066   21995 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0814 16:10:41.733149   21995 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0814 16:10:41.745724   21995 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0814 16:10:41.745752   21995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0814 16:10:41.825571   21995 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0814 16:10:41.825686   21995 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0814 16:10:41.827933   21995 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0814 16:10:41.828007   21995 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0814 16:10:41.828335   21995 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 16:10:41.828393   21995 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 16:10:41.926954   21995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0814 16:10:42.027581   21995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 16:10:42.036924   21995 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0814 16:10:42.036953   21995 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0814 16:10:42.049604   21995 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.320170581s)
	I0814 16:10:42.049639   21995 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
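The sed pipeline that just completed splices a static hosts block into the CoreDNS Corefile so in-cluster lookups of host.minikube.internal resolve to the container gateway (192.168.49.1). The stanza it injects ahead of the `forward . /etc/resolv.conf` line:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

(The second -e expression also inserts `log` ahead of `errors`.)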
	I0814 16:10:42.050200   21995 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.104029969s)
	I0814 16:10:42.051180   21995 node_ready.go:35] waiting up to 6m0s for node "addons-146898" to be "Ready" ...
	I0814 16:10:42.139705   21995 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0814 16:10:42.139797   21995 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0814 16:10:42.147371   21995 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0814 16:10:42.147444   21995 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0814 16:10:42.331860   21995 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0814 16:10:42.331885   21995 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0814 16:10:42.433208   21995 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0814 16:10:42.433284   21995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0814 16:10:42.627989   21995 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0814 16:10:42.628026   21995 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0814 16:10:42.737435   21995 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0814 16:10:42.737468   21995 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0814 16:10:42.828269   21995 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-146898" context rescaled to 1 replicas
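That rescale trims kubeadm's default two-replica coredns Deployment down to one for this single-node cluster; the hand-run equivalent would be:

	kubectl --context addons-146898 -n kube-system scale deployment coredns --replicas=1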
	I0814 16:10:42.832098   21995 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0814 16:10:42.832182   21995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0814 16:10:42.935314   21995 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0814 16:10:42.935398   21995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0814 16:10:42.945944   21995 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0814 16:10:42.945969   21995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0814 16:10:42.947644   21995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0814 16:10:43.141650   21995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0814 16:10:43.343986   21995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0814 16:10:43.438125   21995 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0814 16:10:43.438196   21995 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0814 16:10:43.734540   21995 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0814 16:10:43.734639   21995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0814 16:10:43.846925   21995 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0814 16:10:43.846952   21995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0814 16:10:44.031139   21995 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0814 16:10:44.031174   21995 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0814 16:10:44.139151   21995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.913012983s)
	I0814 16:10:44.143959   21995 node_ready.go:53] node "addons-146898" has status "Ready":"False"
	I0814 16:10:44.152122   21995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0814 16:10:46.633256   21995 node_ready.go:53] node "addons-146898" has status "Ready":"False"
	I0814 16:10:46.649437   21995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.421153504s)
	I0814 16:10:46.649647   21995 addons.go:475] Verifying addon ingress=true in "addons-146898"
	I0814 16:10:46.649674   21995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.31241243s)
	I0814 16:10:46.649772   21995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.310171316s)
	I0814 16:10:46.649835   21995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.199845879s)
	I0814 16:10:46.649867   21995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.111800479s)
	I0814 16:10:46.649914   21995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.020115931s)
	I0814 16:10:46.649956   21995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.722923905s)
	I0814 16:10:46.649967   21995 addons.go:475] Verifying addon registry=true in "addons-146898"
	I0814 16:10:46.650086   21995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.622412894s)
	I0814 16:10:46.650104   21995 addons.go:475] Verifying addon metrics-server=true in "addons-146898"
	I0814 16:10:46.650149   21995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.702476698s)
	I0814 16:10:46.649609   21995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.414424906s)
	I0814 16:10:46.651156   21995 out.go:177] * Verifying ingress addon...
	I0814 16:10:46.652195   21995 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-146898 service yakd-dashboard -n yakd-dashboard
	
	I0814 16:10:46.652230   21995 out.go:177] * Verifying registry addon...
	I0814 16:10:46.654027   21995 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0814 16:10:46.728036   21995 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0814 16:10:46.740919   21995 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0814 16:10:46.740947   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0814 16:10:46.741416   21995 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
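The "object has been modified" warning above is a routine optimistic-concurrency conflict, not corruption: something else (most plausibly the storage-provisioner-rancher apply that finished at 16:10:46.649674) updated the local-path StorageClass between minikube's read and its write, so the stale resourceVersion was rejected. Re-reading and re-writing clears it; a patch sidesteps the read-modify-write race entirely, e.g.:

	kubectl --context addons-146898 patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'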
	I0814 16:10:46.830398   21995 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0814 16:10:46.830422   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
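The kapi.go polling that fills the next stretch of the log re-lists the labelled pods until they leave Pending; outside the harness the same gate can be expressed declaratively:

	kubectl --context addons-146898 -n ingress-nginx wait pod \
	  -l app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=6m
	kubectl --context addons-146898 -n kube-system wait pod \
	  -l kubernetes.io/minikube-addons=registry --for=condition=Ready --timeout=6m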
	I0814 16:10:47.158757   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:47.231716   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:47.648951   21995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.507251997s)
	W0814 16:10:47.649002   21995 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0814 16:10:47.649050   21995 retry.go:31] will retry after 289.159835ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
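The failure being retried here is a CRD establishment race: the apply created the snapshot.storage.k8s.io CRDs and a VolumeSnapshotClass object in the same batch, and the REST mapping for the new kind was not yet served when the class arrived (hence "ensure CRDs are installed first"). The 289 ms retry below, together with the `apply --force` re-run at 16:10:47.939257, succeeds once discovery catches up; an explicit guard would wait for the CRDs to report Established before creating instances of them:

	kubectl --context addons-146898 wait --for=condition=Established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	  crd/volumesnapshots.snapshot.storage.k8s.io
	kubectl --context addons-146898 apply \
	  -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml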
	I0814 16:10:47.649046   21995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.304996419s)
	I0814 16:10:47.659160   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:47.828016   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:47.883745   21995 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0814 16:10:47.883812   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:47.900142   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:47.939257   21995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0814 16:10:48.147337   21995 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0814 16:10:48.227194   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:48.230807   21995 addons.go:234] Setting addon gcp-auth=true in "addons-146898"
	I0814 16:10:48.230863   21995 host.go:66] Checking if "addons-146898" exists ...
	I0814 16:10:48.231401   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:48.258896   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:48.262891   21995 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0814 16:10:48.262965   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:48.280399   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:48.361339   21995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.209164402s)
	I0814 16:10:48.361375   21995 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-146898"
	I0814 16:10:48.363003   21995 out.go:177] * Verifying csi-hostpath-driver addon...
	I0814 16:10:48.365336   21995 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0814 16:10:48.429511   21995 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0814 16:10:48.429539   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:48.658025   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:48.730891   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:48.868843   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:49.054299   21995 node_ready.go:53] node "addons-146898" has status "Ready":"False"
	I0814 16:10:49.158521   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:49.231170   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:49.368445   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:49.657471   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:49.731412   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:49.868931   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:50.158239   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:50.231311   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:50.428222   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:50.731752   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:50.732028   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:50.868538   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:51.158359   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:51.185534   21995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.246222096s)
	I0814 16:10:51.185584   21995 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.922656116s)
	I0814 16:10:51.187466   21995 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0814 16:10:51.188824   21995 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0814 16:10:51.190029   21995 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0814 16:10:51.190043   21995 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0814 16:10:51.231979   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:51.239251   21995 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0814 16:10:51.239274   21995 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0814 16:10:51.257557   21995 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0814 16:10:51.257580   21995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0814 16:10:51.274358   21995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0814 16:10:51.369469   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:51.554666   21995 node_ready.go:53] node "addons-146898" has status "Ready":"False"
	I0814 16:10:51.662560   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:51.731034   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:51.856122   21995 addons.go:475] Verifying addon gcp-auth=true in "addons-146898"
	I0814 16:10:51.858207   21995 out.go:177] * Verifying gcp-auth addon...
	I0814 16:10:51.860477   21995 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0814 16:10:51.926709   21995 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0814 16:10:51.926735   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:10:51.927545   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:52.158441   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:52.231082   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:52.364071   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:10:52.368701   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:52.658246   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:52.731518   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:52.864396   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:10:52.868585   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:53.158575   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:53.231421   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:53.364211   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:10:53.368185   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:53.554690   21995 node_ready.go:53] node "addons-146898" has status "Ready":"False"
	I0814 16:10:53.659757   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:53.731622   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:53.864375   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:10:53.868520   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:54.158366   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:54.231512   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:54.364409   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:10:54.368544   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:54.657857   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:54.730917   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:54.863169   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:10:54.868274   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:55.158313   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:55.258447   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:55.363996   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:10:55.368317   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:55.554957   21995 node_ready.go:53] node "addons-146898" has status "Ready":"False"
	I0814 16:10:55.657780   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:55.730770   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:55.863977   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:10:55.868076   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:56.157510   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:56.231344   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:56.363761   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:10:56.367801   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:56.657157   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:56.731046   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:56.863605   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:10:56.867659   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:57.157842   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:57.230745   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:57.363160   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:10:57.368018   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:57.657440   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:57.731660   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:57.864486   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:10:57.868579   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:58.053705   21995 node_ready.go:53] node "addons-146898" has status "Ready":"False"
	I0814 16:10:58.157394   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:58.231713   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:58.363264   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:10:58.368434   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:58.657854   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:58.731135   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:58.863772   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:10:58.867956   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:59.227884   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:59.232700   21995 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0814 16:10:59.232726   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:59.365427   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:10:59.368823   21995 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0814 16:10:59.368848   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:59.554873   21995 node_ready.go:49] node "addons-146898" has status "Ready":"True"
	I0814 16:10:59.554903   21995 node_ready.go:38] duration metric: took 17.503694802s for node "addons-146898" to be "Ready" ...
	I0814 16:10:59.554916   21995 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 16:10:59.563449   21995 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-rs8rx" in "kube-system" namespace to be "Ready" ...
	I0814 16:10:59.657795   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:59.732827   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:59.864807   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:10:59.869274   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:00.159862   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:00.259085   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:00.363298   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:00.369124   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:00.657915   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:00.731170   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:00.863191   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:00.869375   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:01.068436   21995 pod_ready.go:92] pod "coredns-6f6b679f8f-rs8rx" in "kube-system" namespace has status "Ready":"True"
	I0814 16:11:01.068459   21995 pod_ready.go:81] duration metric: took 1.504984661s for pod "coredns-6f6b679f8f-rs8rx" in "kube-system" namespace to be "Ready" ...
	I0814 16:11:01.068479   21995 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-146898" in "kube-system" namespace to be "Ready" ...
	I0814 16:11:01.072459   21995 pod_ready.go:92] pod "etcd-addons-146898" in "kube-system" namespace has status "Ready":"True"
	I0814 16:11:01.072479   21995 pod_ready.go:81] duration metric: took 3.994113ms for pod "etcd-addons-146898" in "kube-system" namespace to be "Ready" ...
	I0814 16:11:01.072492   21995 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-146898" in "kube-system" namespace to be "Ready" ...
	I0814 16:11:01.076356   21995 pod_ready.go:92] pod "kube-apiserver-addons-146898" in "kube-system" namespace has status "Ready":"True"
	I0814 16:11:01.076377   21995 pod_ready.go:81] duration metric: took 3.877145ms for pod "kube-apiserver-addons-146898" in "kube-system" namespace to be "Ready" ...
	I0814 16:11:01.076386   21995 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-146898" in "kube-system" namespace to be "Ready" ...
	I0814 16:11:01.079842   21995 pod_ready.go:92] pod "kube-controller-manager-addons-146898" in "kube-system" namespace has status "Ready":"True"
	I0814 16:11:01.079859   21995 pod_ready.go:81] duration metric: took 3.466937ms for pod "kube-controller-manager-addons-146898" in "kube-system" namespace to be "Ready" ...
	I0814 16:11:01.079870   21995 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g8sfq" in "kube-system" namespace to be "Ready" ...
	I0814 16:11:01.155544   21995 pod_ready.go:92] pod "kube-proxy-g8sfq" in "kube-system" namespace has status "Ready":"True"
	I0814 16:11:01.155564   21995 pod_ready.go:81] duration metric: took 75.687206ms for pod "kube-proxy-g8sfq" in "kube-system" namespace to be "Ready" ...
	I0814 16:11:01.155574   21995 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-146898" in "kube-system" namespace to be "Ready" ...
	I0814 16:11:01.158363   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:01.231683   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:01.364029   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:01.369284   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:01.555850   21995 pod_ready.go:92] pod "kube-scheduler-addons-146898" in "kube-system" namespace has status "Ready":"True"
	I0814 16:11:01.555876   21995 pod_ready.go:81] duration metric: took 400.294997ms for pod "kube-scheduler-addons-146898" in "kube-system" namespace to be "Ready" ...
	I0814 16:11:01.555890   21995 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace to be "Ready" ...
	I0814 16:11:01.657968   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:01.731490   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:01.863458   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:01.869568   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:02.157489   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:02.231576   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:02.363732   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:02.370310   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:02.658871   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:02.731923   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:02.864007   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:02.869111   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:03.159431   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:03.231776   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:03.364662   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:03.370373   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:03.561277   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:03.658989   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:03.731982   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:03.864085   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:03.869488   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:04.158634   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:04.232273   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:04.364293   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:04.370043   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:04.657909   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:04.731968   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:04.864325   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:04.869350   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:05.157906   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:05.232265   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:05.363762   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:05.368899   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:05.562241   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:05.658832   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:05.760185   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:05.863516   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:05.869619   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:06.158553   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:06.232237   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:06.364933   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:06.369303   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:06.658211   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:06.731738   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:06.864180   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:06.869973   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:07.158636   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:07.232114   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:07.363804   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:07.370980   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:07.658256   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:07.759240   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:07.863908   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:07.868846   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:08.060823   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:08.157949   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:08.231037   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:08.363773   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:08.368750   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:08.658557   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:08.731803   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:08.864420   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:08.869347   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:09.157878   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:09.231975   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:09.363813   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:09.368870   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:09.658080   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:09.731664   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:09.864360   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:09.869216   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:10.061292   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:10.157795   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:10.231860   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:10.364769   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:10.370429   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:10.658505   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:10.731843   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:10.864135   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:10.868980   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:11.157877   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:11.232239   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:11.363484   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:11.369754   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:11.658782   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:11.731984   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:11.863505   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:11.869527   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:12.061712   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:12.158462   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:12.232103   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:12.363694   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:12.369301   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:12.657359   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:12.731938   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:12.864053   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:12.869069   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:13.158274   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:13.231385   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:13.363833   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:13.368652   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:13.658965   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:13.732675   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:13.928344   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:13.929349   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:14.126590   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:14.158685   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:14.231950   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:14.364126   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:14.369687   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:14.658882   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:14.732056   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:14.864375   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:14.869717   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:15.157974   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:15.231152   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:15.363779   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:15.369426   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:15.658604   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:15.731601   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:15.864275   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:15.869916   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:16.157679   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:16.231859   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:16.363782   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:16.370004   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:16.560949   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:16.659138   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:16.731985   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:16.863521   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:16.869166   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:17.158240   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:17.231820   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:17.364241   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:17.369194   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:17.657588   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:17.731631   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:17.864242   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:17.869352   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:18.157779   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:18.231865   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:18.363735   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:18.368596   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:18.562919   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:18.658248   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:18.731383   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:18.863796   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:18.868999   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:19.158946   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:19.232270   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:19.429857   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:19.430878   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:19.658630   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:19.732391   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:19.863576   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:19.927995   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:20.158571   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:20.231888   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:20.364707   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:20.369435   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:20.657916   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:20.731517   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:20.864068   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:20.869267   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:21.062039   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:21.159582   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:21.231847   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:21.363970   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:21.369682   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:21.658699   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:21.731852   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:21.864149   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:21.868867   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:22.158408   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:22.231814   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:22.364321   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:22.369417   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:22.658561   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:22.732735   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:22.864300   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:22.869975   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:23.158230   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:23.231803   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:23.365186   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:23.369746   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:23.562198   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:23.658741   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:23.732191   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:23.863747   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:23.869822   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:24.158114   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:24.231429   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:24.364625   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:24.467157   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:24.657941   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:24.731151   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:24.863505   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:24.869527   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:25.158138   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:25.231699   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:25.364226   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:25.369059   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:25.657919   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:25.732029   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:25.863834   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:25.868927   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:26.061262   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:26.158375   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:26.240365   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:26.363469   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:26.370020   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:26.659377   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:26.731833   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:26.864102   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:26.870509   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:27.158711   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:27.232659   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:27.364320   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:27.370886   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:27.658343   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:27.731632   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:27.866590   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:27.869865   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:28.062263   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:28.158194   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:28.231561   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:28.364217   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:28.369689   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:28.658417   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:28.731957   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:28.863687   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:28.870528   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:29.158173   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:29.231362   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:29.364379   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:29.369743   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:29.658576   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:29.731834   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:29.864805   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:29.868903   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:30.158135   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:30.231229   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:30.364190   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:30.369760   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:30.561708   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:30.658670   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:30.758654   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:30.864170   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:30.869490   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:31.157733   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:31.232152   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:31.363974   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:31.369198   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:31.659625   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:31.731963   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:31.864598   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:31.869838   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:32.162139   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:32.258720   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:32.364181   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:32.368861   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:32.658569   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:32.731893   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:32.863255   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:32.869166   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:33.061121   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:33.158482   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:33.231666   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:33.364217   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:33.369090   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:33.658449   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:33.731596   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:33.863904   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:33.868970   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:34.157895   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:34.231405   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:34.364256   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:34.368867   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:34.658571   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:34.731952   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:34.863424   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:34.869475   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:35.061693   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:35.158561   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:35.231855   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:35.364596   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:35.369871   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:35.658265   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:35.759417   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:35.863903   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:35.868834   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:36.157859   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:36.231709   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:36.364201   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:36.369091   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:36.659032   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:36.731925   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:36.864678   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:36.869592   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:37.061914   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:37.157757   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:37.231700   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:37.364050   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:37.369076   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:37.657835   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:37.732104   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:37.863650   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:37.869538   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:38.157996   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:38.231238   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:38.363772   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:38.368570   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:38.658151   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:38.732565   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:38.863230   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:38.870209   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:39.062900   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:39.158864   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:39.231696   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:39.363676   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:39.370152   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:39.658416   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:39.738684   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:39.864253   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:39.869435   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:40.158438   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:40.231577   21995 kapi.go:107] duration metric: took 53.503544276s to wait for kubernetes.io/minikube-addons=registry ...
	I0814 16:11:40.364253   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:40.369210   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:40.658705   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:40.863759   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:40.868607   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:41.157843   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:41.363928   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:41.369405   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:41.561931   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:41.658193   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:41.863307   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:41.869331   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:42.158645   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:42.363683   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:42.369828   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:42.657899   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:42.864069   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:42.869369   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:43.158091   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:43.364071   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:43.369278   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:43.657502   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:43.863576   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:43.869567   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:44.062187   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:44.158474   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:44.363971   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:44.369336   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:44.657905   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:44.863636   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:44.869904   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:45.158501   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:45.363278   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:45.369272   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:45.657815   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:45.864606   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:45.869732   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:46.062490   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:46.159094   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:46.364209   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:46.369955   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:46.658303   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:46.930395   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:46.932197   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:47.228711   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:47.430956   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:47.431402   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:47.729259   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:47.929814   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:47.930780   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:48.131798   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:48.158092   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:48.364154   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:48.369810   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:48.659018   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:48.864237   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:48.869569   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:49.158780   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:49.363430   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:49.369408   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:49.657933   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:49.864837   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:49.869386   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:50.158702   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:50.363745   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:50.368830   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:50.560605   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:50.658178   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:50.863828   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:50.870373   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:51.158341   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:51.364353   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:51.369935   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:51.658971   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:51.864088   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:51.869271   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:52.158502   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:52.364239   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:52.369934   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:52.561616   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:52.659192   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:52.864751   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:52.868995   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:53.158702   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:53.364438   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:53.370229   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:53.657885   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:53.930158   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:53.931942   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:54.230244   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:54.431387   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:54.432243   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:54.631850   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:54.727617   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:54.927654   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:54.931590   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:55.232368   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:55.435361   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:55.435995   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:55.829821   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:55.931945   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:55.933446   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:56.227455   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:56.364075   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:56.369724   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:56.657601   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:56.863617   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:56.870377   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:57.062005   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:57.158386   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:57.364180   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:57.369331   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:57.659507   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:57.863414   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:57.870035   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:58.159122   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:58.364208   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:58.368957   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:58.659764   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:58.864292   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:58.869723   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:59.158474   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:59.364773   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:59.370214   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:59.561684   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:59.657774   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:59.864193   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:59.869463   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:00.158144   21995 kapi.go:107] duration metric: took 1m13.504113712s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0814 16:12:00.364267   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:00.369550   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:00.937679   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:00.938089   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:01.363753   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:01.368783   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:01.864087   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:01.869454   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:02.061883   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:12:02.363726   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:02.369958   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:02.864635   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:02.870336   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:03.363876   21995 kapi.go:107] duration metric: took 1m11.503396478s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0814 16:12:03.365838   21995 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-146898 cluster.
	I0814 16:12:03.367443   21995 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0814 16:12:03.368832   21995 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
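The opt-out described in the messages above is label-driven at pod creation time. A minimal sketch, assuming the conventional label value "true" and reusing a busybox image that appears elsewhere in this report (the pod name is hypothetical, not from this run):

    # Create a pod that the gcp-auth webhook should leave alone (sketch, not executed here).
    kubectl --context addons-146898 run skip-gcp-auth-demo \
      --image=gcr.io/k8s-minikube/busybox \
      --labels=gcp-auth-skip-secret=true \
      -- sleep 3600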
	I0814 16:12:03.369911   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:03.870445   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:04.130584   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:12:04.369821   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:04.871232   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:05.369866   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:05.870726   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:06.370058   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:06.562084   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:12:06.870391   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:07.369927   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:07.870724   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:08.369961   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:08.870310   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:09.061528   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:12:09.369721   21995 kapi.go:107] duration metric: took 1m21.004381388s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0814 16:12:09.371436   21995 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, ingress-dns, helm-tiller, metrics-server, nvidia-device-plugin, yakd, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0814 16:12:09.372686   21995 addons.go:510] duration metric: took 1m28.827325195s for enable addons: enabled=[cloud-spanner storage-provisioner ingress-dns helm-tiller metrics-server nvidia-device-plugin yakd storage-provisioner-rancher inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
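The --refresh path mentioned in the gcp-auth notes above maps to a single addons invocation; a sketch against this profile (the flag re-creates existing pods so they pick up the credential mount; not executed in this run):

    # Re-run the addon so existing pods get the GCP credential mount (sketch).
    out/minikube-linux-amd64 -p addons-146898 addons enable gcp-auth --refresh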
	I0814 16:12:10.062585   21995 pod_ready.go:92] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"True"
	I0814 16:12:10.062607   21995 pod_ready.go:81] duration metric: took 1m8.50670987s for pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace to be "Ready" ...
	I0814 16:12:10.062619   21995 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-c58zx" in "kube-system" namespace to be "Ready" ...
	I0814 16:12:10.067106   21995 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-c58zx" in "kube-system" namespace has status "Ready":"True"
	I0814 16:12:10.067132   21995 pod_ready.go:81] duration metric: took 4.506211ms for pod "nvidia-device-plugin-daemonset-c58zx" in "kube-system" namespace to be "Ready" ...
	I0814 16:12:10.067162   21995 pod_ready.go:38] duration metric: took 1m10.512207597s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
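Each selector in the list above can also be queried directly; for example, for CoreDNS (selector copied from the log line, --context matching this profile):

    # Spot-check one of the system-critical selectors by hand (sketch).
    kubectl --context addons-146898 -n kube-system get pods -l k8s-app=kube-dns -o wide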
	I0814 16:12:10.067188   21995 api_server.go:52] waiting for apiserver process to appear ...
	I0814 16:12:10.067220   21995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 16:12:10.067279   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 16:12:10.102031   21995 cri.go:89] found id: "191364a2b9cfc88c966b6ad5aae597b680ce2f48d4990e33b3e44701ba05335c"
	I0814 16:12:10.102052   21995 cri.go:89] found id: ""
	I0814 16:12:10.102062   21995 logs.go:276] 1 containers: [191364a2b9cfc88c966b6ad5aae597b680ce2f48d4990e33b3e44701ba05335c]
	I0814 16:12:10.102114   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:10.105678   21995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 16:12:10.105743   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 16:12:10.139594   21995 cri.go:89] found id: "dfeacce667a3589f96e94dddc8e6ed1c91f527968f7ada18db013e2b34cedc6f"
	I0814 16:12:10.139621   21995 cri.go:89] found id: ""
	I0814 16:12:10.139631   21995 logs.go:276] 1 containers: [dfeacce667a3589f96e94dddc8e6ed1c91f527968f7ada18db013e2b34cedc6f]
	I0814 16:12:10.139674   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:10.143005   21995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 16:12:10.143066   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 16:12:10.176153   21995 cri.go:89] found id: "246a06bec9775045e7ecaeec5a5f7cebfed861b75117df0bc752236184ae507b"
	I0814 16:12:10.176177   21995 cri.go:89] found id: ""
	I0814 16:12:10.176186   21995 logs.go:276] 1 containers: [246a06bec9775045e7ecaeec5a5f7cebfed861b75117df0bc752236184ae507b]
	I0814 16:12:10.176227   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:10.179415   21995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 16:12:10.179488   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 16:12:10.214084   21995 cri.go:89] found id: "5ee1b1bb3dede1a0d44516bf49aecf693802168d62a7004f11eaae49131ce285"
	I0814 16:12:10.214139   21995 cri.go:89] found id: ""
	I0814 16:12:10.214147   21995 logs.go:276] 1 containers: [5ee1b1bb3dede1a0d44516bf49aecf693802168d62a7004f11eaae49131ce285]
	I0814 16:12:10.214197   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:10.217498   21995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 16:12:10.217556   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 16:12:10.250777   21995 cri.go:89] found id: "adf58724b31536bfc4bd2d95f3e1350f9577cbae6ea7da14f7787245ede4f945"
	I0814 16:12:10.250801   21995 cri.go:89] found id: ""
	I0814 16:12:10.250811   21995 logs.go:276] 1 containers: [adf58724b31536bfc4bd2d95f3e1350f9577cbae6ea7da14f7787245ede4f945]
	I0814 16:12:10.250860   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:10.254103   21995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 16:12:10.254150   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 16:12:10.287276   21995 cri.go:89] found id: "3527b98c06c0457a34494dce5c812267f4920f0a2e3670a153f3082cd3ff9190"
	I0814 16:12:10.287294   21995 cri.go:89] found id: ""
	I0814 16:12:10.287301   21995 logs.go:276] 1 containers: [3527b98c06c0457a34494dce5c812267f4920f0a2e3670a153f3082cd3ff9190]
	I0814 16:12:10.287344   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:10.290548   21995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 16:12:10.290602   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 16:12:10.323421   21995 cri.go:89] found id: "8ad4d9ab5f75c0dd637f90ee63c4d9b1096ee43d9c1c041a3f16857b80c40bb7"
	I0814 16:12:10.323439   21995 cri.go:89] found id: ""
	I0814 16:12:10.323446   21995 logs.go:276] 1 containers: [8ad4d9ab5f75c0dd637f90ee63c4d9b1096ee43d9c1c041a3f16857b80c40bb7]
	I0814 16:12:10.323494   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:10.326712   21995 logs.go:123] Gathering logs for kubelet ...
	I0814 16:12:10.326737   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 16:12:10.399388   21995 logs.go:123] Gathering logs for kube-scheduler [5ee1b1bb3dede1a0d44516bf49aecf693802168d62a7004f11eaae49131ce285] ...
	I0814 16:12:10.399424   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ee1b1bb3dede1a0d44516bf49aecf693802168d62a7004f11eaae49131ce285"
	I0814 16:12:10.439413   21995 logs.go:123] Gathering logs for kube-proxy [adf58724b31536bfc4bd2d95f3e1350f9577cbae6ea7da14f7787245ede4f945] ...
	I0814 16:12:10.439450   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adf58724b31536bfc4bd2d95f3e1350f9577cbae6ea7da14f7787245ede4f945"
	I0814 16:12:10.471795   21995 logs.go:123] Gathering logs for kindnet [8ad4d9ab5f75c0dd637f90ee63c4d9b1096ee43d9c1c041a3f16857b80c40bb7] ...
	I0814 16:12:10.471823   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ad4d9ab5f75c0dd637f90ee63c4d9b1096ee43d9c1c041a3f16857b80c40bb7"
	I0814 16:12:10.509712   21995 logs.go:123] Gathering logs for CRI-O ...
	I0814 16:12:10.509742   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 16:12:10.588275   21995 logs.go:123] Gathering logs for container status ...
	I0814 16:12:10.588310   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 16:12:10.629453   21995 logs.go:123] Gathering logs for dmesg ...
	I0814 16:12:10.629482   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 16:12:10.641113   21995 logs.go:123] Gathering logs for describe nodes ...
	I0814 16:12:10.641139   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 16:12:10.737594   21995 logs.go:123] Gathering logs for kube-apiserver [191364a2b9cfc88c966b6ad5aae597b680ce2f48d4990e33b3e44701ba05335c] ...
	I0814 16:12:10.737623   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 191364a2b9cfc88c966b6ad5aae597b680ce2f48d4990e33b3e44701ba05335c"
	I0814 16:12:10.782964   21995 logs.go:123] Gathering logs for etcd [dfeacce667a3589f96e94dddc8e6ed1c91f527968f7ada18db013e2b34cedc6f] ...
	I0814 16:12:10.782996   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfeacce667a3589f96e94dddc8e6ed1c91f527968f7ada18db013e2b34cedc6f"
	I0814 16:12:10.826176   21995 logs.go:123] Gathering logs for coredns [246a06bec9775045e7ecaeec5a5f7cebfed861b75117df0bc752236184ae507b] ...
	I0814 16:12:10.826212   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 246a06bec9775045e7ecaeec5a5f7cebfed861b75117df0bc752236184ae507b"
	I0814 16:12:10.885809   21995 logs.go:123] Gathering logs for kube-controller-manager [3527b98c06c0457a34494dce5c812267f4920f0a2e3670a153f3082cd3ff9190] ...
	I0814 16:12:10.885844   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3527b98c06c0457a34494dce5c812267f4920f0a2e3670a153f3082cd3ff9190"
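One detail worth noting in the log-gathering round above: the "container status" step uses a shell fallback. `which crictl || echo crictl` substitutes the bare command name when crictl is not on PATH, and if that invocation still fails, the `|| sudo docker ps -a` branch runs instead. The same pattern in isolation:

    # Prefer crictl (resolved to a full path when available); fall back to docker.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a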
	I0814 16:12:13.440692   21995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 16:12:13.454238   21995 api_server.go:72] duration metric: took 1m32.908901224s to wait for apiserver process to appear ...
	I0814 16:12:13.454260   21995 api_server.go:88] waiting for apiserver healthz status ...
	I0814 16:12:13.454292   21995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 16:12:13.454330   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 16:12:13.486558   21995 cri.go:89] found id: "191364a2b9cfc88c966b6ad5aae597b680ce2f48d4990e33b3e44701ba05335c"
	I0814 16:12:13.486581   21995 cri.go:89] found id: ""
	I0814 16:12:13.486591   21995 logs.go:276] 1 containers: [191364a2b9cfc88c966b6ad5aae597b680ce2f48d4990e33b3e44701ba05335c]
	I0814 16:12:13.486642   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:13.489914   21995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 16:12:13.489971   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 16:12:13.521892   21995 cri.go:89] found id: "dfeacce667a3589f96e94dddc8e6ed1c91f527968f7ada18db013e2b34cedc6f"
	I0814 16:12:13.521911   21995 cri.go:89] found id: ""
	I0814 16:12:13.521919   21995 logs.go:276] 1 containers: [dfeacce667a3589f96e94dddc8e6ed1c91f527968f7ada18db013e2b34cedc6f]
	I0814 16:12:13.521960   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:13.525249   21995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 16:12:13.525299   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 16:12:13.557095   21995 cri.go:89] found id: "246a06bec9775045e7ecaeec5a5f7cebfed861b75117df0bc752236184ae507b"
	I0814 16:12:13.557116   21995 cri.go:89] found id: ""
	I0814 16:12:13.557123   21995 logs.go:276] 1 containers: [246a06bec9775045e7ecaeec5a5f7cebfed861b75117df0bc752236184ae507b]
	I0814 16:12:13.557163   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:13.560338   21995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 16:12:13.560394   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 16:12:13.593724   21995 cri.go:89] found id: "5ee1b1bb3dede1a0d44516bf49aecf693802168d62a7004f11eaae49131ce285"
	I0814 16:12:13.593744   21995 cri.go:89] found id: ""
	I0814 16:12:13.593753   21995 logs.go:276] 1 containers: [5ee1b1bb3dede1a0d44516bf49aecf693802168d62a7004f11eaae49131ce285]
	I0814 16:12:13.593803   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:13.596997   21995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 16:12:13.597081   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 16:12:13.629533   21995 cri.go:89] found id: "adf58724b31536bfc4bd2d95f3e1350f9577cbae6ea7da14f7787245ede4f945"
	I0814 16:12:13.629557   21995 cri.go:89] found id: ""
	I0814 16:12:13.629566   21995 logs.go:276] 1 containers: [adf58724b31536bfc4bd2d95f3e1350f9577cbae6ea7da14f7787245ede4f945]
	I0814 16:12:13.629607   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:13.632773   21995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 16:12:13.632832   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 16:12:13.665627   21995 cri.go:89] found id: "3527b98c06c0457a34494dce5c812267f4920f0a2e3670a153f3082cd3ff9190"
	I0814 16:12:13.665648   21995 cri.go:89] found id: ""
	I0814 16:12:13.665655   21995 logs.go:276] 1 containers: [3527b98c06c0457a34494dce5c812267f4920f0a2e3670a153f3082cd3ff9190]
	I0814 16:12:13.665698   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:13.669023   21995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 16:12:13.669102   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 16:12:13.702014   21995 cri.go:89] found id: "8ad4d9ab5f75c0dd637f90ee63c4d9b1096ee43d9c1c041a3f16857b80c40bb7"
	I0814 16:12:13.702038   21995 cri.go:89] found id: ""
	I0814 16:12:13.702047   21995 logs.go:276] 1 containers: [8ad4d9ab5f75c0dd637f90ee63c4d9b1096ee43d9c1c041a3f16857b80c40bb7]
	I0814 16:12:13.702101   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:13.705328   21995 logs.go:123] Gathering logs for dmesg ...
	I0814 16:12:13.705349   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 16:12:13.717108   21995 logs.go:123] Gathering logs for kube-apiserver [191364a2b9cfc88c966b6ad5aae597b680ce2f48d4990e33b3e44701ba05335c] ...
	I0814 16:12:13.717140   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 191364a2b9cfc88c966b6ad5aae597b680ce2f48d4990e33b3e44701ba05335c"
	I0814 16:12:13.760034   21995 logs.go:123] Gathering logs for etcd [dfeacce667a3589f96e94dddc8e6ed1c91f527968f7ada18db013e2b34cedc6f] ...
	I0814 16:12:13.760064   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfeacce667a3589f96e94dddc8e6ed1c91f527968f7ada18db013e2b34cedc6f"
	I0814 16:12:13.802392   21995 logs.go:123] Gathering logs for coredns [246a06bec9775045e7ecaeec5a5f7cebfed861b75117df0bc752236184ae507b] ...
	I0814 16:12:13.802421   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 246a06bec9775045e7ecaeec5a5f7cebfed861b75117df0bc752236184ae507b"
	I0814 16:12:13.860725   21995 logs.go:123] Gathering logs for kube-scheduler [5ee1b1bb3dede1a0d44516bf49aecf693802168d62a7004f11eaae49131ce285] ...
	I0814 16:12:13.860762   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ee1b1bb3dede1a0d44516bf49aecf693802168d62a7004f11eaae49131ce285"
	I0814 16:12:13.901341   21995 logs.go:123] Gathering logs for kube-controller-manager [3527b98c06c0457a34494dce5c812267f4920f0a2e3670a153f3082cd3ff9190] ...
	I0814 16:12:13.901371   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3527b98c06c0457a34494dce5c812267f4920f0a2e3670a153f3082cd3ff9190"
	I0814 16:12:13.954318   21995 logs.go:123] Gathering logs for kindnet [8ad4d9ab5f75c0dd637f90ee63c4d9b1096ee43d9c1c041a3f16857b80c40bb7] ...
	I0814 16:12:13.954347   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ad4d9ab5f75c0dd637f90ee63c4d9b1096ee43d9c1c041a3f16857b80c40bb7"
	I0814 16:12:13.992414   21995 logs.go:123] Gathering logs for CRI-O ...
	I0814 16:12:13.992446   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 16:12:14.067680   21995 logs.go:123] Gathering logs for container status ...
	I0814 16:12:14.067712   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 16:12:14.108330   21995 logs.go:123] Gathering logs for kubelet ...
	I0814 16:12:14.108367   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 16:12:14.182895   21995 logs.go:123] Gathering logs for describe nodes ...
	I0814 16:12:14.182931   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 16:12:14.279220   21995 logs.go:123] Gathering logs for kube-proxy [adf58724b31536bfc4bd2d95f3e1350f9577cbae6ea7da14f7787245ede4f945] ...
	I0814 16:12:14.279250   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adf58724b31536bfc4bd2d95f3e1350f9577cbae6ea7da14f7787245ede4f945"
	I0814 16:12:16.812490   21995 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0814 16:12:16.816079   21995 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0814 16:12:16.816866   21995 api_server.go:141] control plane version: v1.31.0
	I0814 16:12:16.816885   21995 api_server.go:131] duration metric: took 3.362619343s to wait for apiserver health ...
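The healthz probe above is a plain HTTPS GET that can be reproduced by hand; unauthenticated access to /healthz is typically allowed by the default system:public-info-viewer binding, though this log does not verify that RBAC detail:

    # Reproduce the apiserver health check (sketch); expected response body: ok
    curl -sk https://192.168.49.2:8443/healthz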
	I0814 16:12:16.816892   21995 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 16:12:16.816917   21995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 16:12:16.816964   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 16:12:16.849723   21995 cri.go:89] found id: "191364a2b9cfc88c966b6ad5aae597b680ce2f48d4990e33b3e44701ba05335c"
	I0814 16:12:16.849745   21995 cri.go:89] found id: ""
	I0814 16:12:16.849755   21995 logs.go:276] 1 containers: [191364a2b9cfc88c966b6ad5aae597b680ce2f48d4990e33b3e44701ba05335c]
	I0814 16:12:16.849812   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:16.853014   21995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 16:12:16.853106   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 16:12:16.885299   21995 cri.go:89] found id: "dfeacce667a3589f96e94dddc8e6ed1c91f527968f7ada18db013e2b34cedc6f"
	I0814 16:12:16.885325   21995 cri.go:89] found id: ""
	I0814 16:12:16.885335   21995 logs.go:276] 1 containers: [dfeacce667a3589f96e94dddc8e6ed1c91f527968f7ada18db013e2b34cedc6f]
	I0814 16:12:16.885397   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:16.888561   21995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 16:12:16.888635   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 16:12:16.921191   21995 cri.go:89] found id: "246a06bec9775045e7ecaeec5a5f7cebfed861b75117df0bc752236184ae507b"
	I0814 16:12:16.921209   21995 cri.go:89] found id: ""
	I0814 16:12:16.921216   21995 logs.go:276] 1 containers: [246a06bec9775045e7ecaeec5a5f7cebfed861b75117df0bc752236184ae507b]
	I0814 16:12:16.921253   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:16.924673   21995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 16:12:16.924747   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 16:12:16.958966   21995 cri.go:89] found id: "5ee1b1bb3dede1a0d44516bf49aecf693802168d62a7004f11eaae49131ce285"
	I0814 16:12:16.958983   21995 cri.go:89] found id: ""
	I0814 16:12:16.958990   21995 logs.go:276] 1 containers: [5ee1b1bb3dede1a0d44516bf49aecf693802168d62a7004f11eaae49131ce285]
	I0814 16:12:16.959036   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:16.962336   21995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 16:12:16.962441   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 16:12:16.995206   21995 cri.go:89] found id: "adf58724b31536bfc4bd2d95f3e1350f9577cbae6ea7da14f7787245ede4f945"
	I0814 16:12:16.995235   21995 cri.go:89] found id: ""
	I0814 16:12:16.995246   21995 logs.go:276] 1 containers: [adf58724b31536bfc4bd2d95f3e1350f9577cbae6ea7da14f7787245ede4f945]
	I0814 16:12:16.995293   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:16.998777   21995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 16:12:16.998836   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 16:12:17.032374   21995 cri.go:89] found id: "3527b98c06c0457a34494dce5c812267f4920f0a2e3670a153f3082cd3ff9190"
	I0814 16:12:17.032404   21995 cri.go:89] found id: ""
	I0814 16:12:17.032414   21995 logs.go:276] 1 containers: [3527b98c06c0457a34494dce5c812267f4920f0a2e3670a153f3082cd3ff9190]
	I0814 16:12:17.032469   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:17.035699   21995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 16:12:17.035749   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 16:12:17.067892   21995 cri.go:89] found id: "8ad4d9ab5f75c0dd637f90ee63c4d9b1096ee43d9c1c041a3f16857b80c40bb7"
	I0814 16:12:17.067917   21995 cri.go:89] found id: ""
	I0814 16:12:17.067925   21995 logs.go:276] 1 containers: [8ad4d9ab5f75c0dd637f90ee63c4d9b1096ee43d9c1c041a3f16857b80c40bb7]
	I0814 16:12:17.067967   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:17.071178   21995 logs.go:123] Gathering logs for dmesg ...
	I0814 16:12:17.071207   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 16:12:17.082929   21995 logs.go:123] Gathering logs for kube-apiserver [191364a2b9cfc88c966b6ad5aae597b680ce2f48d4990e33b3e44701ba05335c] ...
	I0814 16:12:17.082975   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 191364a2b9cfc88c966b6ad5aae597b680ce2f48d4990e33b3e44701ba05335c"
	I0814 16:12:17.125272   21995 logs.go:123] Gathering logs for kubelet ...
	I0814 16:12:17.125304   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 16:12:17.206438   21995 logs.go:123] Gathering logs for etcd [dfeacce667a3589f96e94dddc8e6ed1c91f527968f7ada18db013e2b34cedc6f] ...
	I0814 16:12:17.206485   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfeacce667a3589f96e94dddc8e6ed1c91f527968f7ada18db013e2b34cedc6f"
	I0814 16:12:17.250267   21995 logs.go:123] Gathering logs for coredns [246a06bec9775045e7ecaeec5a5f7cebfed861b75117df0bc752236184ae507b] ...
	I0814 16:12:17.250302   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 246a06bec9775045e7ecaeec5a5f7cebfed861b75117df0bc752236184ae507b"
	I0814 16:12:17.309858   21995 logs.go:123] Gathering logs for kube-scheduler [5ee1b1bb3dede1a0d44516bf49aecf693802168d62a7004f11eaae49131ce285] ...
	I0814 16:12:17.309900   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ee1b1bb3dede1a0d44516bf49aecf693802168d62a7004f11eaae49131ce285"
	I0814 16:12:17.348314   21995 logs.go:123] Gathering logs for kube-proxy [adf58724b31536bfc4bd2d95f3e1350f9577cbae6ea7da14f7787245ede4f945] ...
	I0814 16:12:17.348346   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adf58724b31536bfc4bd2d95f3e1350f9577cbae6ea7da14f7787245ede4f945"
	I0814 16:12:17.381056   21995 logs.go:123] Gathering logs for kube-controller-manager [3527b98c06c0457a34494dce5c812267f4920f0a2e3670a153f3082cd3ff9190] ...
	I0814 16:12:17.381088   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3527b98c06c0457a34494dce5c812267f4920f0a2e3670a153f3082cd3ff9190"
	I0814 16:12:17.434174   21995 logs.go:123] Gathering logs for kindnet [8ad4d9ab5f75c0dd637f90ee63c4d9b1096ee43d9c1c041a3f16857b80c40bb7] ...
	I0814 16:12:17.434212   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ad4d9ab5f75c0dd637f90ee63c4d9b1096ee43d9c1c041a3f16857b80c40bb7"
	I0814 16:12:17.473752   21995 logs.go:123] Gathering logs for CRI-O ...
	I0814 16:12:17.473783   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 16:12:17.549302   21995 logs.go:123] Gathering logs for describe nodes ...
	I0814 16:12:17.549338   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 16:12:17.647991   21995 logs.go:123] Gathering logs for container status ...
	I0814 16:12:17.648018   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 16:12:20.198191   21995 system_pods.go:59] 19 kube-system pods found
	I0814 16:12:20.198231   21995 system_pods.go:61] "coredns-6f6b679f8f-rs8rx" [e1ff80e6-35f1-43e8-a10b-de57a706a45d] Running
	I0814 16:12:20.198236   21995 system_pods.go:61] "csi-hostpath-attacher-0" [eb22957e-40d4-46c5-ab19-a5f80dc49fe2] Running
	I0814 16:12:20.198239   21995 system_pods.go:61] "csi-hostpath-resizer-0" [b0b249da-1106-4481-a727-5d3dd4e9309e] Running
	I0814 16:12:20.198243   21995 system_pods.go:61] "csi-hostpathplugin-59ftp" [d8f46820-47d0-4d6a-882c-807b5a5b4203] Running
	I0814 16:12:20.198246   21995 system_pods.go:61] "etcd-addons-146898" [7a0c1724-2052-4a2e-842c-be916d45c6e8] Running
	I0814 16:12:20.198249   21995 system_pods.go:61] "kindnet-8q79t" [3f144cfd-ff50-4c02-a99d-01486262a254] Running
	I0814 16:12:20.198254   21995 system_pods.go:61] "kube-apiserver-addons-146898" [5192ebef-081d-44af-8efb-fe9694c28323] Running
	I0814 16:12:20.198257   21995 system_pods.go:61] "kube-controller-manager-addons-146898" [3e7df712-ed9d-4b18-b1b5-f73fda29bc48] Running
	I0814 16:12:20.198261   21995 system_pods.go:61] "kube-ingress-dns-minikube" [c9f18577-09a8-4168-a9ce-4c3dacaff132] Running
	I0814 16:12:20.198264   21995 system_pods.go:61] "kube-proxy-g8sfq" [cabf99db-c672-46bb-bb8e-f912b2e34db9] Running
	I0814 16:12:20.198267   21995 system_pods.go:61] "kube-scheduler-addons-146898" [51dea6b6-bb73-401d-8a0f-beb9adbfc01f] Running
	I0814 16:12:20.198270   21995 system_pods.go:61] "metrics-server-8988944d9-79d8t" [a144a102-aafb-4752-9784-1bdb16857bcd] Running
	I0814 16:12:20.198273   21995 system_pods.go:61] "nvidia-device-plugin-daemonset-c58zx" [203e32d0-800d-4b0e-acc3-caf43f35078e] Running
	I0814 16:12:20.198277   21995 system_pods.go:61] "registry-6fb4cdfc84-gwcbq" [6f24e44c-5e4f-4ef3-b21c-9950979c1e64] Running
	I0814 16:12:20.198282   21995 system_pods.go:61] "registry-proxy-dbmdb" [e307ed1d-1881-4d95-8ec9-361298af6c49] Running
	I0814 16:12:20.198288   21995 system_pods.go:61] "snapshot-controller-56fcc65765-47lvb" [432b350c-a8c3-4ac2-9061-b9c66e439297] Running
	I0814 16:12:20.198291   21995 system_pods.go:61] "snapshot-controller-56fcc65765-vfr28" [263c2d7c-3af6-41e8-97c4-7b3bcb707158] Running
	I0814 16:12:20.198298   21995 system_pods.go:61] "storage-provisioner" [07f9bb9e-3e12-4e4d-843a-a0e06de9d402] Running
	I0814 16:12:20.198301   21995 system_pods.go:61] "tiller-deploy-b48cc5f79-57b8n" [ab2aaa5f-4152-4d49-8a92-7653708c9955] Running
	I0814 16:12:20.198309   21995 system_pods.go:74] duration metric: took 3.381410419s to wait for pod list to return data ...
	I0814 16:12:20.198323   21995 default_sa.go:34] waiting for default service account to be created ...
	I0814 16:12:20.200954   21995 default_sa.go:45] found service account: "default"
	I0814 16:12:20.200978   21995 default_sa.go:55] duration metric: took 2.648661ms for default service account to be created ...
	I0814 16:12:20.200987   21995 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 16:12:20.208648   21995 system_pods.go:86] 19 kube-system pods found
	I0814 16:12:20.208677   21995 system_pods.go:89] "coredns-6f6b679f8f-rs8rx" [e1ff80e6-35f1-43e8-a10b-de57a706a45d] Running
	I0814 16:12:20.208683   21995 system_pods.go:89] "csi-hostpath-attacher-0" [eb22957e-40d4-46c5-ab19-a5f80dc49fe2] Running
	I0814 16:12:20.208687   21995 system_pods.go:89] "csi-hostpath-resizer-0" [b0b249da-1106-4481-a727-5d3dd4e9309e] Running
	I0814 16:12:20.208692   21995 system_pods.go:89] "csi-hostpathplugin-59ftp" [d8f46820-47d0-4d6a-882c-807b5a5b4203] Running
	I0814 16:12:20.208696   21995 system_pods.go:89] "etcd-addons-146898" [7a0c1724-2052-4a2e-842c-be916d45c6e8] Running
	I0814 16:12:20.208699   21995 system_pods.go:89] "kindnet-8q79t" [3f144cfd-ff50-4c02-a99d-01486262a254] Running
	I0814 16:12:20.208703   21995 system_pods.go:89] "kube-apiserver-addons-146898" [5192ebef-081d-44af-8efb-fe9694c28323] Running
	I0814 16:12:20.208708   21995 system_pods.go:89] "kube-controller-manager-addons-146898" [3e7df712-ed9d-4b18-b1b5-f73fda29bc48] Running
	I0814 16:12:20.208712   21995 system_pods.go:89] "kube-ingress-dns-minikube" [c9f18577-09a8-4168-a9ce-4c3dacaff132] Running
	I0814 16:12:20.208716   21995 system_pods.go:89] "kube-proxy-g8sfq" [cabf99db-c672-46bb-bb8e-f912b2e34db9] Running
	I0814 16:12:20.208720   21995 system_pods.go:89] "kube-scheduler-addons-146898" [51dea6b6-bb73-401d-8a0f-beb9adbfc01f] Running
	I0814 16:12:20.208726   21995 system_pods.go:89] "metrics-server-8988944d9-79d8t" [a144a102-aafb-4752-9784-1bdb16857bcd] Running
	I0814 16:12:20.208734   21995 system_pods.go:89] "nvidia-device-plugin-daemonset-c58zx" [203e32d0-800d-4b0e-acc3-caf43f35078e] Running
	I0814 16:12:20.208740   21995 system_pods.go:89] "registry-6fb4cdfc84-gwcbq" [6f24e44c-5e4f-4ef3-b21c-9950979c1e64] Running
	I0814 16:12:20.208747   21995 system_pods.go:89] "registry-proxy-dbmdb" [e307ed1d-1881-4d95-8ec9-361298af6c49] Running
	I0814 16:12:20.208751   21995 system_pods.go:89] "snapshot-controller-56fcc65765-47lvb" [432b350c-a8c3-4ac2-9061-b9c66e439297] Running
	I0814 16:12:20.208754   21995 system_pods.go:89] "snapshot-controller-56fcc65765-vfr28" [263c2d7c-3af6-41e8-97c4-7b3bcb707158] Running
	I0814 16:12:20.208758   21995 system_pods.go:89] "storage-provisioner" [07f9bb9e-3e12-4e4d-843a-a0e06de9d402] Running
	I0814 16:12:20.208762   21995 system_pods.go:89] "tiller-deploy-b48cc5f79-57b8n" [ab2aaa5f-4152-4d49-8a92-7653708c9955] Running
	I0814 16:12:20.208771   21995 system_pods.go:126] duration metric: took 7.778254ms to wait for k8s-apps to be running ...
	I0814 16:12:20.208779   21995 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 16:12:20.208822   21995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:12:20.219952   21995 system_svc.go:56] duration metric: took 11.164526ms WaitForService to wait for kubelet
	I0814 16:12:20.219978   21995 kubeadm.go:582] duration metric: took 1m39.674643236s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 16:12:20.220003   21995 node_conditions.go:102] verifying NodePressure condition ...
	I0814 16:12:20.223057   21995 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0814 16:12:20.223081   21995 node_conditions.go:123] node cpu capacity is 8
	I0814 16:12:20.223094   21995 node_conditions.go:105] duration metric: took 3.08597ms to run NodePressure ...
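The capacity figures above come from the node object and can be read back directly (sketch; field path per the standard Node API):

    # Print the node's reported capacity (cpu, memory, ephemeral-storage).
    kubectl --context addons-146898 get node addons-146898 -o jsonpath='{.status.capacity}'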
	I0814 16:12:20.223105   21995 start.go:241] waiting for startup goroutines ...
	I0814 16:12:20.223111   21995 start.go:246] waiting for cluster config update ...
	I0814 16:12:20.223126   21995 start.go:255] writing updated cluster config ...
	I0814 16:12:20.227252   21995 ssh_runner.go:195] Run: rm -f paused
	I0814 16:12:20.275717   21995 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 16:12:20.361195   21995 out.go:177] * Done! kubectl is now configured to use "addons-146898" cluster and "default" namespace by default
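A quick confirmation of the final state, assuming the kubectl context name matches the profile name as minikube sets it:

    # Expected output: addons-146898
    kubectl config current-context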
	
	
	==> CRI-O <==
	Aug 14 16:15:47 addons-146898 crio[1023]: time="2024-08-14 16:15:47.216657706Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=66baaa2f-382a-48c8-ad9a-f7c78492567c name=/runtime.v1.ImageService/ImageStatus
	Aug 14 16:15:47 addons-146898 crio[1023]: time="2024-08-14 16:15:47.217353411Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86],Size_:4944818,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=66baaa2f-382a-48c8-ad9a-f7c78492567c name=/runtime.v1.ImageService/ImageStatus
	Aug 14 16:15:47 addons-146898 crio[1023]: time="2024-08-14 16:15:47.217940305Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=35ad7df5-7948-4d86-b165-bf42ab148c02 name=/runtime.v1.ImageService/ImageStatus
	Aug 14 16:15:47 addons-146898 crio[1023]: time="2024-08-14 16:15:47.218457191Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86],Size_:4944818,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=35ad7df5-7948-4d86-b165-bf42ab148c02 name=/runtime.v1.ImageService/ImageStatus
	Aug 14 16:15:47 addons-146898 crio[1023]: time="2024-08-14 16:15:47.219115766Z" level=info msg="Creating container: default/hello-world-app-55bf9c44b4-5q8tm/hello-world-app" id=3d4859d2-b72d-486c-bd52-8bfa6fd03c9c name=/runtime.v1.RuntimeService/CreateContainer
	Aug 14 16:15:47 addons-146898 crio[1023]: time="2024-08-14 16:15:47.219212993Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 14 16:15:47 addons-146898 crio[1023]: time="2024-08-14 16:15:47.231901346Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a616d3c803916e1c99da2ea4fe4e524ad2a2f0e374f6b97c7ed20f12f9e127b0/merged/etc/passwd: no such file or directory"
	Aug 14 16:15:47 addons-146898 crio[1023]: time="2024-08-14 16:15:47.231952399Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a616d3c803916e1c99da2ea4fe4e524ad2a2f0e374f6b97c7ed20f12f9e127b0/merged/etc/group: no such file or directory"
	Aug 14 16:15:47 addons-146898 crio[1023]: time="2024-08-14 16:15:47.264207601Z" level=info msg="Created container ecbb3524a95e82b531b59efb8913a099ae5e8680837ddcbb584032b5a365872d: default/hello-world-app-55bf9c44b4-5q8tm/hello-world-app" id=3d4859d2-b72d-486c-bd52-8bfa6fd03c9c name=/runtime.v1.RuntimeService/CreateContainer
	Aug 14 16:15:47 addons-146898 crio[1023]: time="2024-08-14 16:15:47.264850183Z" level=info msg="Starting container: ecbb3524a95e82b531b59efb8913a099ae5e8680837ddcbb584032b5a365872d" id=16390de5-3a18-4d06-bf35-1ca6ab0a2d98 name=/runtime.v1.RuntimeService/StartContainer
	Aug 14 16:15:47 addons-146898 crio[1023]: time="2024-08-14 16:15:47.270715945Z" level=info msg="Started container" PID=11679 containerID=ecbb3524a95e82b531b59efb8913a099ae5e8680837ddcbb584032b5a365872d description=default/hello-world-app-55bf9c44b4-5q8tm/hello-world-app id=16390de5-3a18-4d06-bf35-1ca6ab0a2d98 name=/runtime.v1.RuntimeService/StartContainer sandboxID=67fd78e94f0adfff4a3401c53c93f319178b85c086be528e6d6186f9c668dedd
	Aug 14 16:15:48 addons-146898 crio[1023]: time="2024-08-14 16:15:48.343198413Z" level=warning msg="Stopping container df6fb3b0abe807a05271d665da6fddfa9871a5747ebdc4200a0d4d800ca966e4 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=60780df8-f1e2-41c7-81b0-11a6465cb8eb name=/runtime.v1.RuntimeService/StopContainer
	Aug 14 16:15:48 addons-146898 conmon[5884]: conmon df6fb3b0abe807a05271 <ninfo>: container 5896 exited with status 137
	Aug 14 16:15:48 addons-146898 crio[1023]: time="2024-08-14 16:15:48.473835097Z" level=info msg="Stopped container df6fb3b0abe807a05271d665da6fddfa9871a5747ebdc4200a0d4d800ca966e4: ingress-nginx/ingress-nginx-controller-7559cbf597-kkjnl/controller" id=60780df8-f1e2-41c7-81b0-11a6465cb8eb name=/runtime.v1.RuntimeService/StopContainer
	Aug 14 16:15:48 addons-146898 crio[1023]: time="2024-08-14 16:15:48.474340287Z" level=info msg="Stopping pod sandbox: c66d1bc8716819119758a85433fd0c6b72628666ff7c7cc6f8290e57dc034568" id=c174f24c-9674-4dfc-8473-11d93acdc7f3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 14 16:15:48 addons-146898 crio[1023]: time="2024-08-14 16:15:48.477243744Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-OX5ONQWGPMBCCC2U - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-QYZ7GVYMXMAGZZ4O - [0:0]\n-X KUBE-HP-QYZ7GVYMXMAGZZ4O\n-X KUBE-HP-OX5ONQWGPMBCCC2U\nCOMMIT\n"
	Aug 14 16:15:48 addons-146898 crio[1023]: time="2024-08-14 16:15:48.478533337Z" level=info msg="Closing host port tcp:80"
	Aug 14 16:15:48 addons-146898 crio[1023]: time="2024-08-14 16:15:48.478568252Z" level=info msg="Closing host port tcp:443"
	Aug 14 16:15:48 addons-146898 crio[1023]: time="2024-08-14 16:15:48.479865281Z" level=info msg="Host port tcp:80 does not have an open socket"
	Aug 14 16:15:48 addons-146898 crio[1023]: time="2024-08-14 16:15:48.479885256Z" level=info msg="Host port tcp:443 does not have an open socket"
	Aug 14 16:15:48 addons-146898 crio[1023]: time="2024-08-14 16:15:48.480021667Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7559cbf597-kkjnl Namespace:ingress-nginx ID:c66d1bc8716819119758a85433fd0c6b72628666ff7c7cc6f8290e57dc034568 UID:01ca932c-f8e0-4731-b346-eb32978fcfaa NetNS:/var/run/netns/88b104dd-9148-4d98-9fd2-b78a95af77fd Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 14 16:15:48 addons-146898 crio[1023]: time="2024-08-14 16:15:48.480151247Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7559cbf597-kkjnl from CNI network \"kindnet\" (type=ptp)"
	Aug 14 16:15:48 addons-146898 crio[1023]: time="2024-08-14 16:15:48.522501438Z" level=info msg="Stopped pod sandbox: c66d1bc8716819119758a85433fd0c6b72628666ff7c7cc6f8290e57dc034568" id=c174f24c-9674-4dfc-8473-11d93acdc7f3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 14 16:15:48 addons-146898 crio[1023]: time="2024-08-14 16:15:48.822927612Z" level=info msg="Removing container: df6fb3b0abe807a05271d665da6fddfa9871a5747ebdc4200a0d4d800ca966e4" id=065b729f-1721-4cbf-a4f3-8ffcc9aa8efb name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 14 16:15:48 addons-146898 crio[1023]: time="2024-08-14 16:15:48.835658431Z" level=info msg="Removed container df6fb3b0abe807a05271d665da6fddfa9871a5747ebdc4200a0d4d800ca966e4: ingress-nginx/ingress-nginx-controller-7559cbf597-kkjnl/controller" id=065b729f-1721-4cbf-a4f3-8ffcc9aa8efb name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ecbb3524a95e8       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        6 seconds ago       Running             hello-world-app           0                   67fd78e94f0ad       hello-world-app-55bf9c44b4-5q8tm
	581fb67114c20       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                              2 minutes ago       Running             nginx                     0                   edc5700a88fe4       nginx
	44643b6c6e90a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   1d2ce8cea40dd       busybox
	49f0e02bb01d2       684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66                                                             4 minutes ago       Exited              patch                     2                   6a98d27efb2e9       ingress-nginx-admission-patch-9gws6
	804517e458597       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   4 minutes ago       Exited              create                    0                   aa61e86c79550       ingress-nginx-admission-create-md956
	76e1492c01d8d       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago       Running             metrics-server            0                   a87704ad3890d       metrics-server-8988944d9-79d8t
	246a06bec9775       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             4 minutes ago       Running             coredns                   0                   39be2e6da715d       coredns-6f6b679f8f-rs8rx
	b6661ad7ea490       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   3181cb7f40b62       storage-provisioner
	8ad4d9ab5f75c       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b                           5 minutes ago       Running             kindnet-cni               0                   6fa39e51d9eb8       kindnet-8q79t
	adf58724b3153       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                             5 minutes ago       Running             kube-proxy                0                   bcccd8f0036f4       kube-proxy-g8sfq
	191364a2b9cfc       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                             5 minutes ago       Running             kube-apiserver            0                   3242f826af48b       kube-apiserver-addons-146898
	dfeacce667a35       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             5 minutes ago       Running             etcd                      0                   736a843486681       etcd-addons-146898
	3527b98c06c04       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                             5 minutes ago       Running             kube-controller-manager   0                   92939ee714511       kube-controller-manager-addons-146898
	5ee1b1bb3dede       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                             5 minutes ago       Running             kube-scheduler            0                   a97758e9f9904       kube-scheduler-addons-146898
	
	
	==> coredns [246a06bec9775045e7ecaeec5a5f7cebfed861b75117df0bc752236184ae507b] <==
	[INFO] 10.244.0.2:48085 - 13206 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000084193s
	[INFO] 10.244.0.2:35630 - 64345 "AAAA IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.003697225s
	[INFO] 10.244.0.2:35630 - 43099 "A IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.00466186s
	[INFO] 10.244.0.2:45579 - 51002 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.003848609s
	[INFO] 10.244.0.2:45579 - 20030 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.003894453s
	[INFO] 10.244.0.2:37185 - 37776 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004016615s
	[INFO] 10.244.0.2:37185 - 23955 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004471266s
	[INFO] 10.244.0.2:42414 - 64049 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00005434s
	[INFO] 10.244.0.2:42414 - 52021 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000054625s
	[INFO] 10.244.0.21:43408 - 4470 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000195774s
	[INFO] 10.244.0.21:38240 - 22188 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000292614s
	[INFO] 10.244.0.21:38014 - 55372 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00015071s
	[INFO] 10.244.0.21:56407 - 15661 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000096472s
	[INFO] 10.244.0.21:35096 - 13074 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000067927s
	[INFO] 10.244.0.21:43337 - 46833 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000128737s
	[INFO] 10.244.0.21:33752 - 10618 "A IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.007014459s
	[INFO] 10.244.0.21:41997 - 29691 "AAAA IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.007564708s
	[INFO] 10.244.0.21:42835 - 32345 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.00501955s
	[INFO] 10.244.0.21:40780 - 34365 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006231354s
	[INFO] 10.244.0.21:48421 - 50846 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004729351s
	[INFO] 10.244.0.21:35403 - 36661 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004800195s
	[INFO] 10.244.0.21:44808 - 58688 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000766863s
	[INFO] 10.244.0.21:56746 - 5068 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000928719s
	[INFO] 10.244.0.25:51993 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00022113s
	[INFO] 10.244.0.25:45872 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000118603s
	
	
	==> describe nodes <==
	Name:               addons-146898
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-146898
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35
	                    minikube.k8s.io/name=addons-146898
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_14T16_10_36_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-146898
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 16:10:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-146898
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 16:15:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 16:13:39 +0000   Wed, 14 Aug 2024 16:10:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 16:13:39 +0000   Wed, 14 Aug 2024 16:10:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 16:13:39 +0000   Wed, 14 Aug 2024 16:10:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 16:13:39 +0000   Wed, 14 Aug 2024 16:10:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-146898
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 379be374f1c148e28523fa9e7f5e33ce
	  System UUID:                1a425e32-2dd3-4f11-8284-3396b217a9b8
	  Boot ID:                    01947443-31df-48f7-8446-7d38dbb2c026
	  Kernel Version:             5.15.0-1066-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  default                     hello-world-app-55bf9c44b4-5q8tm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 coredns-6f6b679f8f-rs8rx                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m13s
	  kube-system                 etcd-addons-146898                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m19s
	  kube-system                 kindnet-8q79t                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m13s
	  kube-system                 kube-apiserver-addons-146898             250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-controller-manager-addons-146898    200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-proxy-g8sfq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-scheduler-addons-146898             100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 metrics-server-8988944d9-79d8t           100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         5m8s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m8s                   kube-proxy       
	  Normal   Starting                 5m24s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m24s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  5m23s (x8 over 5m24s)  kubelet          Node addons-146898 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m23s (x8 over 5m24s)  kubelet          Node addons-146898 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m23s (x7 over 5m24s)  kubelet          Node addons-146898 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m18s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m18s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  5m18s                  kubelet          Node addons-146898 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m18s                  kubelet          Node addons-146898 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m18s                  kubelet          Node addons-146898 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m14s                  node-controller  Node addons-146898 event: Registered Node addons-146898 in Controller
	  Normal   NodeReady                4m54s                  kubelet          Node addons-146898 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000627] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000618] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.592364] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.044662] systemd[1]: /lib/systemd/system/cloud-init-local.service:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.006688] systemd[1]: /lib/systemd/system/cloud-init.service:19: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.012233] systemd[1]: /lib/systemd/system/cloud-final.service:9: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.003175] systemd[1]: /lib/systemd/system/cloud-config.service:8: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.015066] systemd[1]: /lib/systemd/system/cloud-init.target:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +6.190736] kauditd_printk_skb: 46 callbacks suppressed
	[Aug14 16:13] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 16 5d 95 3b 4b fb 42 2c ab a1 ce 2f 08 00
	[  +1.011855] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 16 5d 95 3b 4b fb 42 2c ab a1 ce 2f 08 00
	[  +2.015831] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 16 5d 95 3b 4b fb 42 2c ab a1 ce 2f 08 00
	[  +4.063648] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 16 5d 95 3b 4b fb 42 2c ab a1 ce 2f 08 00
	[  +8.191338] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 16 5d 95 3b 4b fb 42 2c ab a1 ce 2f 08 00
	[Aug14 16:14] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 16 5d 95 3b 4b fb 42 2c ab a1 ce 2f 08 00
	[ +33.277375] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000036] ll header: 00000000: 16 5d 95 3b 4b fb 42 2c ab a1 ce 2f 08 00
	
	
	==> etcd [dfeacce667a3589f96e94dddc8e6ed1c91f527968f7ada18db013e2b34cedc6f] <==
	{"level":"warn","ts":"2024-08-14T16:10:43.638299Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.283437ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-08-14T16:10:43.638391Z","caller":"traceutil/trace.go:171","msg":"trace[385103287] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:422; }","duration":"105.384175ms","start":"2024-08-14T16:10:43.532996Z","end":"2024-08-14T16:10:43.638380Z","steps":["trace[385103287] 'agreement among raft nodes before linearized reading'  (duration: 105.253588ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T16:10:43.638611Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.515054ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T16:10:43.638674Z","caller":"traceutil/trace.go:171","msg":"trace[183833095] range","detail":"{range_begin:/registry/services/specs/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:422; }","duration":"105.581567ms","start":"2024-08-14T16:10:43.533084Z","end":"2024-08-14T16:10:43.638665Z","steps":["trace[183833095] 'agreement among raft nodes before linearized reading'  (duration: 105.499536ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T16:10:43.639426Z","caller":"traceutil/trace.go:171","msg":"trace[606312538] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"106.586291ms","start":"2024-08-14T16:10:43.532830Z","end":"2024-08-14T16:10:43.639416Z","steps":["trace[606312538] 'process raft request'  (duration: 102.954708ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T16:10:44.638667Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.830239ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128031207388838662 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/controllerrevisions/kube-system/registry-proxy-5787bf5f6d\" mod_revision:0 > success:<request_put:<key:\"/registry/controllerrevisions/kube-system/registry-proxy-5787bf5f6d\" value_size:2702 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-08-14T16:10:44.642717Z","caller":"traceutil/trace.go:171","msg":"trace[1593354808] transaction","detail":"{read_only:false; response_revision:486; number_of_response:1; }","duration":"107.466381ms","start":"2024-08-14T16:10:44.535237Z","end":"2024-08-14T16:10:44.642703Z","steps":["trace[1593354808] 'compare'  (duration: 100.712981ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T16:10:44.642906Z","caller":"traceutil/trace.go:171","msg":"trace[1493422766] linearizableReadLoop","detail":"{readStateIndex:497; appliedIndex:496; }","duration":"106.738369ms","start":"2024-08-14T16:10:44.536157Z","end":"2024-08-14T16:10:44.642896Z","steps":["trace[1493422766] 'read index received'  (duration: 926.843µs)","trace[1493422766] 'applied index is now lower than readState.Index'  (duration: 105.810593ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-14T16:10:44.643157Z","caller":"traceutil/trace.go:171","msg":"trace[1511698523] transaction","detail":"{read_only:false; response_revision:487; number_of_response:1; }","duration":"100.458441ms","start":"2024-08-14T16:10:44.542690Z","end":"2024-08-14T16:10:44.643148Z","steps":["trace[1511698523] 'process raft request'  (duration: 98.98844ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T16:10:44.643391Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.864182ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/tiller-deploy\" ","response":"range_response_count:1 size:4640"}
	{"level":"info","ts":"2024-08-14T16:10:44.643451Z","caller":"traceutil/trace.go:171","msg":"trace[2034288172] range","detail":"{range_begin:/registry/deployments/kube-system/tiller-deploy; range_end:; response_count:1; response_revision:488; }","duration":"107.932541ms","start":"2024-08-14T16:10:44.535509Z","end":"2024-08-14T16:10:44.643442Z","steps":["trace[2034288172] 'agreement among raft nodes before linearized reading'  (duration: 107.840083ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T16:10:44.741762Z","caller":"traceutil/trace.go:171","msg":"trace[495505027] transaction","detail":"{read_only:false; response_revision:490; number_of_response:1; }","duration":"108.103234ms","start":"2024-08-14T16:10:44.633637Z","end":"2024-08-14T16:10:44.741740Z","steps":["trace[495505027] 'process raft request'  (duration: 105.566058ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T16:10:44.742294Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.708523ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/local-path\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T16:10:44.743840Z","caller":"traceutil/trace.go:171","msg":"trace[1703353517] range","detail":"{range_begin:/registry/storageclasses/local-path; range_end:; response_count:0; response_revision:490; }","duration":"116.257376ms","start":"2024-08-14T16:10:44.627564Z","end":"2024-08-14T16:10:44.743821Z","steps":["trace[1703353517] 'agreement among raft nodes before linearized reading'  (duration: 114.651401ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T16:10:44.744115Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.67652ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2024-08-14T16:10:44.747716Z","caller":"traceutil/trace.go:171","msg":"trace[675380562] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:1; response_revision:490; }","duration":"112.294597ms","start":"2024-08-14T16:10:44.635411Z","end":"2024-08-14T16:10:44.747706Z","steps":["trace[675380562] 'agreement among raft nodes before linearized reading'  (duration: 108.645368ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T16:10:44.747665Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.287281ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-08-14T16:10:44.747824Z","caller":"traceutil/trace.go:171","msg":"trace[328019672] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:497; }","duration":"200.452122ms","start":"2024-08-14T16:10:44.547365Z","end":"2024-08-14T16:10:44.747817Z","steps":["trace[328019672] 'agreement among raft nodes before linearized reading'  (duration: 200.268025ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T16:10:44.827499Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.342884ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-146898\" ","response":"range_response_count:1 size:5648"}
	{"level":"info","ts":"2024-08-14T16:10:44.827579Z","caller":"traceutil/trace.go:171","msg":"trace[2027603397] range","detail":"{range_begin:/registry/minions/addons-146898; range_end:; response_count:1; response_revision:497; }","duration":"182.435642ms","start":"2024-08-14T16:10:44.645129Z","end":"2024-08-14T16:10:44.827565Z","steps":["trace[2027603397] 'agreement among raft nodes before linearized reading'  (duration: 182.129973ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T16:12:31.671712Z","caller":"traceutil/trace.go:171","msg":"trace[1003861272] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1331; }","duration":"107.457063ms","start":"2024-08-14T16:12:31.564234Z","end":"2024-08-14T16:12:31.671691Z","steps":["trace[1003861272] 'process raft request'  (duration: 56.420267ms)","trace[1003861272] 'compare'  (duration: 50.928112ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-14T16:12:31.870536Z","caller":"traceutil/trace.go:171","msg":"trace[514024397] transaction","detail":"{read_only:false; response_revision:1333; number_of_response:1; }","duration":"190.240255ms","start":"2024-08-14T16:12:31.680274Z","end":"2024-08-14T16:12:31.870514Z","steps":["trace[514024397] 'process raft request'  (duration: 189.595525ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T16:12:31.870497Z","caller":"traceutil/trace.go:171","msg":"trace[1431956324] linearizableReadLoop","detail":"{readStateIndex:1375; appliedIndex:1374; }","duration":"129.371359ms","start":"2024-08-14T16:12:31.741109Z","end":"2024-08-14T16:12:31.870480Z","steps":["trace[1431956324] 'read index received'  (duration: 128.757359ms)","trace[1431956324] 'applied index is now lower than readState.Index'  (duration: 613.288µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-14T16:12:31.870657Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.52788ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T16:12:31.870692Z","caller":"traceutil/trace.go:171","msg":"trace[1688816681] range","detail":"{range_begin:/registry/serviceaccounts; range_end:; response_count:0; response_revision:1333; }","duration":"129.583563ms","start":"2024-08-14T16:12:31.741101Z","end":"2024-08-14T16:12:31.870684Z","steps":["trace[1688816681] 'agreement among raft nodes before linearized reading'  (duration: 129.498148ms)"],"step_count":1}
	
	
	==> kernel <==
	 16:15:53 up 58 min,  0 users,  load average: 0.08, 0.37, 0.22
	Linux addons-146898 5.15.0-1066-gcp #74~20.04.1-Ubuntu SMP Fri Jul 26 09:28:41 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [8ad4d9ab5f75c0dd637f90ee63c4d9b1096ee43d9c1c041a3f16857b80c40bb7] <==
	I0814 16:14:39.026674       1 main.go:299] handling current node
	W0814 16:14:48.079631       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0814 16:14:48.079672       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0814 16:14:49.025794       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0814 16:14:49.026191       1 main.go:299] handling current node
	I0814 16:14:59.026041       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0814 16:14:59.026084       1 main.go:299] handling current node
	I0814 16:15:09.026563       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0814 16:15:09.026601       1 main.go:299] handling current node
	W0814 16:15:18.016760       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0814 16:15:18.016808       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0814 16:15:19.025930       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0814 16:15:19.025970       1 main.go:299] handling current node
	W0814 16:15:20.133016       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0814 16:15:20.133074       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0814 16:15:28.930596       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0814 16:15:28.930639       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0814 16:15:29.026765       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0814 16:15:29.026801       1 main.go:299] handling current node
	I0814 16:15:39.026695       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0814 16:15:39.026726       1 main.go:299] handling current node
	I0814 16:15:49.025953       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0814 16:15:49.025984       1 main.go:299] handling current node
	W0814 16:15:53.004464       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0814 16:15:53.004497       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	
	
	==> kube-apiserver [191364a2b9cfc88c966b6ad5aae597b680ce2f48d4990e33b3e44701ba05335c] <==
	I0814 16:12:09.787221       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0814 16:12:30.868587       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47740: use of closed network connection
	E0814 16:12:31.027539       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47762: use of closed network connection
	I0814 16:12:51.088344       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.42.237"}
	E0814 16:13:03.980926       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.28:43102: read: connection reset by peer
	E0814 16:13:07.177993       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0814 16:13:17.210544       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0814 16:13:18.229359       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0814 16:13:18.482104       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0814 16:13:22.763177       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0814 16:13:23.032966       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.242.205"}
	I0814 16:13:37.765451       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0814 16:13:37.765501       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0814 16:13:37.778273       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0814 16:13:37.778403       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0814 16:13:37.794591       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0814 16:13:37.794648       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0814 16:13:37.829342       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0814 16:13:37.829499       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0814 16:13:37.880567       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0814 16:13:37.880614       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0814 16:13:38.830203       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0814 16:13:38.880636       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0814 16:13:38.938900       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0814 16:15:43.839092       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.25.34"}
	
	
	==> kube-controller-manager [3527b98c06c0457a34494dce5c812267f4920f0a2e3670a153f3082cd3ff9190] <==
	E0814 16:14:22.230004       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0814 16:14:50.975262       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:14:50.975316       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0814 16:14:53.808919       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:14:53.808961       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0814 16:15:00.716492       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:15:00.716540       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0814 16:15:19.931078       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:15:19.931121       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0814 16:15:34.258376       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:15:34.258432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0814 16:15:42.977286       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:15:42.977328       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0814 16:15:43.633084       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="13.401962ms"
	I0814 16:15:43.637991       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="4.851078ms"
	I0814 16:15:43.638143       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="46.651µs"
	I0814 16:15:43.638223       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="21.684µs"
	I0814 16:15:43.639343       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="32.789µs"
	I0814 16:15:45.328065       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0814 16:15:45.329492       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7559cbf597" duration="9.178µs"
	I0814 16:15:45.332543       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	W0814 16:15:45.897850       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:15:45.897887       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0814 16:15:47.831757       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="5.17699ms"
	I0814 16:15:47.831831       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="38.796µs"
	
	
	==> kube-proxy [adf58724b31536bfc4bd2d95f3e1350f9577cbae6ea7da14f7787245ede4f945] <==
	I0814 16:10:43.227024       1 server_linux.go:66] "Using iptables proxy"
	I0814 16:10:44.042249       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0814 16:10:44.044518       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 16:10:44.547086       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0814 16:10:44.547866       1 server_linux.go:169] "Using iptables Proxier"
	I0814 16:10:44.735166       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 16:10:44.735940       1 server.go:483] "Version info" version="v1.31.0"
	I0814 16:10:44.736020       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 16:10:44.837599       1 config.go:197] "Starting service config controller"
	I0814 16:10:44.842877       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 16:10:44.840008       1 config.go:326] "Starting node config controller"
	I0814 16:10:44.843068       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 16:10:44.839635       1 config.go:104] "Starting endpoint slice config controller"
	I0814 16:10:44.843146       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 16:10:44.948728       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0814 16:10:44.948910       1 shared_informer.go:320] Caches are synced for service config
	I0814 16:10:44.949083       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5ee1b1bb3dede1a0d44516bf49aecf693802168d62a7004f11eaae49131ce285] <==
	W0814 16:10:32.856265       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0814 16:10:32.856269       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 16:10:32.856283       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0814 16:10:32.856285       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:32.856226       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 16:10:32.856307       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:33.755674       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0814 16:10:33.755720       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:33.783564       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0814 16:10:33.783603       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:33.801227       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 16:10:33.801262       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:33.853752       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 16:10:33.853795       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:33.900742       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 16:10:33.900777       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:33.908027       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 16:10:33.908061       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:33.926270       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 16:10:33.926313       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:33.946693       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 16:10:33.946742       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:33.950999       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 16:10:33.951035       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0814 16:10:34.354489       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 14 16:15:43 addons-146898 kubelet[1651]: I0814 16:15:43.631990    1651 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8f46820-47d0-4d6a-882c-807b5a5b4203" containerName="liveness-probe"
	Aug 14 16:15:43 addons-146898 kubelet[1651]: I0814 16:15:43.745083    1651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j6fh\" (UniqueName: \"kubernetes.io/projected/34fb9bf9-829b-4f33-9bda-f095298ddec9-kube-api-access-4j6fh\") pod \"hello-world-app-55bf9c44b4-5q8tm\" (UID: \"34fb9bf9-829b-4f33-9bda-f095298ddec9\") " pod="default/hello-world-app-55bf9c44b4-5q8tm"
	Aug 14 16:15:44 addons-146898 kubelet[1651]: I0814 16:15:44.750184    1651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbvzn\" (UniqueName: \"kubernetes.io/projected/c9f18577-09a8-4168-a9ce-4c3dacaff132-kube-api-access-jbvzn\") pod \"c9f18577-09a8-4168-a9ce-4c3dacaff132\" (UID: \"c9f18577-09a8-4168-a9ce-4c3dacaff132\") "
	Aug 14 16:15:44 addons-146898 kubelet[1651]: I0814 16:15:44.751975    1651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9f18577-09a8-4168-a9ce-4c3dacaff132-kube-api-access-jbvzn" (OuterVolumeSpecName: "kube-api-access-jbvzn") pod "c9f18577-09a8-4168-a9ce-4c3dacaff132" (UID: "c9f18577-09a8-4168-a9ce-4c3dacaff132"). InnerVolumeSpecName "kube-api-access-jbvzn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 14 16:15:44 addons-146898 kubelet[1651]: I0814 16:15:44.810345    1651 scope.go:117] "RemoveContainer" containerID="ade0b2ef5346a2617043ca597adcf86be6f2a710efe5fe1e59cacff0cf52cdf8"
	Aug 14 16:15:44 addons-146898 kubelet[1651]: I0814 16:15:44.825295    1651 scope.go:117] "RemoveContainer" containerID="ade0b2ef5346a2617043ca597adcf86be6f2a710efe5fe1e59cacff0cf52cdf8"
	Aug 14 16:15:44 addons-146898 kubelet[1651]: E0814 16:15:44.825700    1651 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ade0b2ef5346a2617043ca597adcf86be6f2a710efe5fe1e59cacff0cf52cdf8\": container with ID starting with ade0b2ef5346a2617043ca597adcf86be6f2a710efe5fe1e59cacff0cf52cdf8 not found: ID does not exist" containerID="ade0b2ef5346a2617043ca597adcf86be6f2a710efe5fe1e59cacff0cf52cdf8"
	Aug 14 16:15:44 addons-146898 kubelet[1651]: I0814 16:15:44.825745    1651 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ade0b2ef5346a2617043ca597adcf86be6f2a710efe5fe1e59cacff0cf52cdf8"} err="failed to get container status \"ade0b2ef5346a2617043ca597adcf86be6f2a710efe5fe1e59cacff0cf52cdf8\": rpc error: code = NotFound desc = could not find container \"ade0b2ef5346a2617043ca597adcf86be6f2a710efe5fe1e59cacff0cf52cdf8\": container with ID starting with ade0b2ef5346a2617043ca597adcf86be6f2a710efe5fe1e59cacff0cf52cdf8 not found: ID does not exist"
	Aug 14 16:15:44 addons-146898 kubelet[1651]: I0814 16:15:44.851027    1651 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jbvzn\" (UniqueName: \"kubernetes.io/projected/c9f18577-09a8-4168-a9ce-4c3dacaff132-kube-api-access-jbvzn\") on node \"addons-146898\" DevicePath \"\""
	Aug 14 16:15:45 addons-146898 kubelet[1651]: I0814 16:15:45.428882    1651 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f17f700-6eef-4480-80ff-d27f501fc3c1" path="/var/lib/kubelet/pods/4f17f700-6eef-4480-80ff-d27f501fc3c1/volumes"
	Aug 14 16:15:45 addons-146898 kubelet[1651]: I0814 16:15:45.429414    1651 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbcdd11d-31e3-4a1c-a124-b6f167bdc974" path="/var/lib/kubelet/pods/bbcdd11d-31e3-4a1c-a124-b6f167bdc974/volumes"
	Aug 14 16:15:45 addons-146898 kubelet[1651]: I0814 16:15:45.429878    1651 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9f18577-09a8-4168-a9ce-4c3dacaff132" path="/var/lib/kubelet/pods/c9f18577-09a8-4168-a9ce-4c3dacaff132/volumes"
	Aug 14 16:15:45 addons-146898 kubelet[1651]: E0814 16:15:45.591048    1651 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652145590838363,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604249,},InodesUsed:&UInt64Value{Value:241,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:15:45 addons-146898 kubelet[1651]: E0814 16:15:45.591078    1651 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652145590838363,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604249,},InodesUsed:&UInt64Value{Value:241,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:15:48 addons-146898 kubelet[1651]: I0814 16:15:48.672922    1651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/01ca932c-f8e0-4731-b346-eb32978fcfaa-webhook-cert\") pod \"01ca932c-f8e0-4731-b346-eb32978fcfaa\" (UID: \"01ca932c-f8e0-4731-b346-eb32978fcfaa\") "
	Aug 14 16:15:48 addons-146898 kubelet[1651]: I0814 16:15:48.672981    1651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58fn6\" (UniqueName: \"kubernetes.io/projected/01ca932c-f8e0-4731-b346-eb32978fcfaa-kube-api-access-58fn6\") pod \"01ca932c-f8e0-4731-b346-eb32978fcfaa\" (UID: \"01ca932c-f8e0-4731-b346-eb32978fcfaa\") "
	Aug 14 16:15:48 addons-146898 kubelet[1651]: I0814 16:15:48.674779    1651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ca932c-f8e0-4731-b346-eb32978fcfaa-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "01ca932c-f8e0-4731-b346-eb32978fcfaa" (UID: "01ca932c-f8e0-4731-b346-eb32978fcfaa"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 14 16:15:48 addons-146898 kubelet[1651]: I0814 16:15:48.674866    1651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ca932c-f8e0-4731-b346-eb32978fcfaa-kube-api-access-58fn6" (OuterVolumeSpecName: "kube-api-access-58fn6") pod "01ca932c-f8e0-4731-b346-eb32978fcfaa" (UID: "01ca932c-f8e0-4731-b346-eb32978fcfaa"). InnerVolumeSpecName "kube-api-access-58fn6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 14 16:15:48 addons-146898 kubelet[1651]: I0814 16:15:48.774142    1651 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/01ca932c-f8e0-4731-b346-eb32978fcfaa-webhook-cert\") on node \"addons-146898\" DevicePath \"\""
	Aug 14 16:15:48 addons-146898 kubelet[1651]: I0814 16:15:48.774178    1651 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-58fn6\" (UniqueName: \"kubernetes.io/projected/01ca932c-f8e0-4731-b346-eb32978fcfaa-kube-api-access-58fn6\") on node \"addons-146898\" DevicePath \"\""
	Aug 14 16:15:48 addons-146898 kubelet[1651]: I0814 16:15:48.821795    1651 scope.go:117] "RemoveContainer" containerID="df6fb3b0abe807a05271d665da6fddfa9871a5747ebdc4200a0d4d800ca966e4"
	Aug 14 16:15:48 addons-146898 kubelet[1651]: I0814 16:15:48.835908    1651 scope.go:117] "RemoveContainer" containerID="df6fb3b0abe807a05271d665da6fddfa9871a5747ebdc4200a0d4d800ca966e4"
	Aug 14 16:15:48 addons-146898 kubelet[1651]: E0814 16:15:48.836293    1651 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df6fb3b0abe807a05271d665da6fddfa9871a5747ebdc4200a0d4d800ca966e4\": container with ID starting with df6fb3b0abe807a05271d665da6fddfa9871a5747ebdc4200a0d4d800ca966e4 not found: ID does not exist" containerID="df6fb3b0abe807a05271d665da6fddfa9871a5747ebdc4200a0d4d800ca966e4"
	Aug 14 16:15:48 addons-146898 kubelet[1651]: I0814 16:15:48.836338    1651 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df6fb3b0abe807a05271d665da6fddfa9871a5747ebdc4200a0d4d800ca966e4"} err="failed to get container status \"df6fb3b0abe807a05271d665da6fddfa9871a5747ebdc4200a0d4d800ca966e4\": rpc error: code = NotFound desc = could not find container \"df6fb3b0abe807a05271d665da6fddfa9871a5747ebdc4200a0d4d800ca966e4\": container with ID starting with df6fb3b0abe807a05271d665da6fddfa9871a5747ebdc4200a0d4d800ca966e4 not found: ID does not exist"
	Aug 14 16:15:49 addons-146898 kubelet[1651]: I0814 16:15:49.428618    1651 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ca932c-f8e0-4731-b346-eb32978fcfaa" path="/var/lib/kubelet/pods/01ca932c-f8e0-4731-b346-eb32978fcfaa/volumes"
	
	
	==> storage-provisioner [b6661ad7ea490fad210b2513ffa64647cb6ca22e8e580e1f1ed4a268f425110b] <==
	I0814 16:11:00.165855       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0814 16:11:00.174910       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0814 16:11:00.174960       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0814 16:11:00.182608       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0814 16:11:00.182782       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-146898_19ee1e90-1b64-4533-80fe-123fc1837ab2!
	I0814 16:11:00.183220       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0bcf1568-4c44-443f-a1cf-b3444b909576", APIVersion:"v1", ResourceVersion:"939", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-146898_19ee1e90-1b64-4533-80fe-123fc1837ab2 became leader
	I0814 16:11:00.283920       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-146898_19ee1e90-1b64-4533-80fe-123fc1837ab2!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-146898 -n addons-146898
helpers_test.go:261: (dbg) Run:  kubectl --context addons-146898 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (151.80s)
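Note: the step that failed this test is the in-cluster probe `ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'` (visible in the Audit table below with no End Time): curl dials 127.0.0.1 but overrides the Host header so ingress-nginx routes the request to the nginx test backend, and the ssh step surfaced curl's exit code 28 (operation timed out). A minimal standalone sketch of the same check in Go, illustrative only and assuming it runs where ingress-nginx serves 127.0.0.1:80 (e.g. inside the minikube node):

	// ingress_probe.go — sketch of the failed curl step; not part of the test suite.
	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os"
		"time"
	)

	func main() {
		req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// ingress-nginx routes on the Host header, not the dialed address,
		// so this selects the Ingress rule for nginx.example.com.
		req.Host = "nginx.example.com"

		client := &http.Client{Timeout: 30 * time.Second} // assumed timeout
		resp, err := client.Do(req)
		if err != nil {
			fmt.Fprintln(os.Stderr, err) // a timeout here mirrors curl's exit 28
			os.Exit(1)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s, %d bytes\n", resp.Status, len(body))
	}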

TestAddons/parallel/MetricsServer (302.53s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.043568ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-79d8t" [a144a102-aafb-4752-9784-1bdb16857bcd] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00345692s
addons_test.go:417: (dbg) Run:  kubectl --context addons-146898 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-146898 top pods -n kube-system: exit status 1 (108.916879ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-rs8rx, age: 2m31.049408959s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-146898 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-146898 top pods -n kube-system: exit status 1 (62.756103ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-rs8rx, age: 2m34.998048389s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-146898 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-146898 top pods -n kube-system: exit status 1 (62.79073ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-rs8rx, age: 2m38.999798942s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-146898 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-146898 top pods -n kube-system: exit status 1 (73.731905ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-rs8rx, age: 2m42.522638067s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-146898 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-146898 top pods -n kube-system: exit status 1 (64.134137ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-rs8rx, age: 2m49.576795669s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-146898 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-146898 top pods -n kube-system: exit status 1 (84.43967ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-rs8rx, age: 2m57.844388993s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-146898 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-146898 top pods -n kube-system: exit status 1 (62.6447ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-rs8rx, age: 3m26.113604745s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-146898 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-146898 top pods -n kube-system: exit status 1 (63.41997ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-rs8rx, age: 4m1.434430103s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-146898 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-146898 top pods -n kube-system: exit status 1 (60.667574ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-rs8rx, age: 5m10.89125618s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-146898 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-146898 top pods -n kube-system: exit status 1 (61.697932ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-rs8rx, age: 6m25.715185754s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-146898 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-146898 top pods -n kube-system: exit status 1 (61.222634ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-rs8rx, age: 7m26.059642982s

** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
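Note: the repeated Run/Non-zero exit pairs above are the test polling `kubectl top pods` until metrics-server has scraped the pods at least once; `error: Metrics not available for pod ...` is what kubectl prints while the metrics API still has no sample, and here it never cleared within the budget. A rough Go sketch of that polling pattern (context name taken from the log; the 5-minute budget and 10-second backoff are assumptions, not the test's actual values):

	// top_retry.go — illustrative retry loop approximating the repeated
	// `kubectl top pods -n kube-system` runs shown above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(5 * time.Minute) // assumed budget
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", "addons-146898",
				"top", "pods", "-n", "kube-system").CombinedOutput()
			if err == nil {
				fmt.Print(string(out)) // metrics are flowing; done
				return
			}
			fmt.Fprintf(os.Stderr, "metrics not ready yet: %s", out)
			time.Sleep(10 * time.Second) // assumed backoff
		}
		fmt.Fprintln(os.Stderr, "failed checking metric server: timed out")
		os.Exit(1)
	}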
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-146898 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-146898
helpers_test.go:235: (dbg) docker inspect addons-146898:

-- stdout --
	[
	    {
	        "Id": "033665d39c0a7230426af208a0a609390bc8324f3152d84ce1b4a25599238d2c",
	        "Created": "2024-08-14T16:10:21.614403661Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 22748,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-14T16:10:21.747404839Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a625a3e39975c5bf9755ab525e60a1f8bd16cab9b58877622897d26607806095",
	        "ResolvConfPath": "/var/lib/docker/containers/033665d39c0a7230426af208a0a609390bc8324f3152d84ce1b4a25599238d2c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/033665d39c0a7230426af208a0a609390bc8324f3152d84ce1b4a25599238d2c/hostname",
	        "HostsPath": "/var/lib/docker/containers/033665d39c0a7230426af208a0a609390bc8324f3152d84ce1b4a25599238d2c/hosts",
	        "LogPath": "/var/lib/docker/containers/033665d39c0a7230426af208a0a609390bc8324f3152d84ce1b4a25599238d2c/033665d39c0a7230426af208a0a609390bc8324f3152d84ce1b4a25599238d2c-json.log",
	        "Name": "/addons-146898",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-146898:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-146898",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f8b25431309fbdc2c0fa65361e314b56a81d220c9cda8ed6a3018ac9b0055322-init/diff:/var/lib/docker/overlay2/d41949e4c516eb21351007b40b547059df55afa65c858079d4bf62d2491589b5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f8b25431309fbdc2c0fa65361e314b56a81d220c9cda8ed6a3018ac9b0055322/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f8b25431309fbdc2c0fa65361e314b56a81d220c9cda8ed6a3018ac9b0055322/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f8b25431309fbdc2c0fa65361e314b56a81d220c9cda8ed6a3018ac9b0055322/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-146898",
	                "Source": "/var/lib/docker/volumes/addons-146898/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-146898",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-146898",
	                "name.minikube.sigs.k8s.io": "addons-146898",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5b76a336ebed6afcb0d8509794e0e8e2f1bfbf9e0bf4ba773dfc123eb3abe017",
	            "SandboxKey": "/var/run/docker/netns/5b76a336ebed",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-146898": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "58300098e6e99c5cfa782c54a5523432e4763754ed66ecf3c4f594d976665be8",
	                    "EndpointID": "3c8797cd27cce5ca93d7092488c5ef89fd6c894a75004c754096ec7d9706e668",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-146898",
	                        "033665d39c0a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
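Note: in the inspect output above, every exposed container port (22, 2376, 5000, 8443, 32443) is published only on 127.0.0.1 with an ephemeral host port; the 8443/tcp → 127.0.0.1:32771 entry is how the Kubernetes apiserver is reached from the host. A sketch of reading that mapping programmatically (container name taken from the log; the struct declares only the fields this example needs):

	// host_port.go — sketch: decode `docker inspect` and print the published
	// host endpoint for 8443/tcp, matching the Ports map shown above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type container struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "addons-146898").Output()
		if err != nil {
			panic(err)
		}
		var cs []container // docker inspect emits a JSON array
		if err := json.Unmarshal(out, &cs); err != nil || len(cs) == 0 {
			panic("unexpected inspect output")
		}
		for _, b := range cs[0].NetworkSettings.Ports["8443/tcp"] {
			fmt.Printf("apiserver published at %s:%s\n", b.HostIp, b.HostPort)
		}
	}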
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-146898 -n addons-146898
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-146898 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-146898 logs -n 25: (1.091039366s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-docker-996390                                                                   | download-docker-996390 | jenkins | v1.33.1 | 14 Aug 24 16:09 UTC | 14 Aug 24 16:09 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-011666   | jenkins | v1.33.1 | 14 Aug 24 16:09 UTC |                     |
	|         | binary-mirror-011666                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:38739                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-011666                                                                     | binary-mirror-011666   | jenkins | v1.33.1 | 14 Aug 24 16:09 UTC | 14 Aug 24 16:09 UTC |
	| addons  | enable dashboard -p                                                                         | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:09 UTC |                     |
	|         | addons-146898                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:09 UTC |                     |
	|         | addons-146898                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-146898 --wait=true                                                                | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:09 UTC | 14 Aug 24 16:12 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | addons-146898 addons disable                                                                | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:12 UTC | 14 Aug 24 16:12 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:12 UTC | 14 Aug 24 16:12 UTC |
	|         | -p addons-146898                                                                            |                        |         |         |                     |                     |
	| addons  | addons-146898 addons disable                                                                | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:12 UTC | 14 Aug 24 16:12 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:12 UTC | 14 Aug 24 16:12 UTC |
	|         | addons-146898                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:12 UTC | 14 Aug 24 16:12 UTC |
	|         | -p addons-146898                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-146898 ssh cat                                                                       | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:12 UTC | 14 Aug 24 16:12 UTC |
	|         | /opt/local-path-provisioner/pvc-b8279e68-d1f4-45e9-8a5a-4efa6552cee5_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-146898 addons disable                                                                | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:12 UTC | 14 Aug 24 16:13 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-146898 ip                                                                            | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:12 UTC | 14 Aug 24 16:12 UTC |
	| addons  | addons-146898 addons disable                                                                | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:12 UTC | 14 Aug 24 16:12 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-146898 addons disable                                                                | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:13 UTC | 14 Aug 24 16:13 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-146898 addons disable                                                                | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:13 UTC | 14 Aug 24 16:13 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:13 UTC | 14 Aug 24 16:13 UTC |
	|         | addons-146898                                                                               |                        |         |         |                     |                     |
	| addons  | addons-146898 addons                                                                        | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:13 UTC | 14 Aug 24 16:13 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-146898 ssh curl -s                                                                   | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:13 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-146898 addons                                                                        | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:13 UTC | 14 Aug 24 16:13 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-146898 ip                                                                            | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:15 UTC | 14 Aug 24 16:15 UTC |
	| addons  | addons-146898 addons disable                                                                | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:15 UTC | 14 Aug 24 16:15 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-146898 addons disable                                                                | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:15 UTC | 14 Aug 24 16:15 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-146898 addons                                                                        | addons-146898          | jenkins | v1.33.1 | 14 Aug 24 16:18 UTC | 14 Aug 24 16:18 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 16:09:59
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 16:09:59.314800   21995 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:09:59.314895   21995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:09:59.314899   21995 out.go:304] Setting ErrFile to fd 2...
	I0814 16:09:59.314903   21995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:09:59.315071   21995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13813/.minikube/bin
	I0814 16:09:59.315645   21995 out.go:298] Setting JSON to false
	I0814 16:09:59.316461   21995 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3143,"bootTime":1723648656,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 16:09:59.316512   21995 start.go:139] virtualization: kvm guest
	I0814 16:09:59.318739   21995 out.go:177] * [addons-146898] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 16:09:59.320198   21995 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 16:09:59.320195   21995 notify.go:220] Checking for updates...
	I0814 16:09:59.323046   21995 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 16:09:59.324440   21995 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19446-13813/kubeconfig
	I0814 16:09:59.325670   21995 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13813/.minikube
	I0814 16:09:59.326970   21995 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 16:09:59.328258   21995 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 16:09:59.329837   21995 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 16:09:59.350965   21995 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0814 16:09:59.351094   21995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0814 16:09:59.397696   21995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-14 16:09:59.389210633 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0814 16:09:59.397822   21995 docker.go:307] overlay module found
	I0814 16:09:59.399807   21995 out.go:177] * Using the docker driver based on user configuration
	I0814 16:09:59.401466   21995 start.go:297] selected driver: docker
	I0814 16:09:59.401479   21995 start.go:901] validating driver "docker" against <nil>
	I0814 16:09:59.401493   21995 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 16:09:59.402237   21995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0814 16:09:59.445849   21995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-14 16:09:59.437307117 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0814 16:09:59.446035   21995 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 16:09:59.446285   21995 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 16:09:59.448080   21995 out.go:177] * Using Docker driver with root privileges
	I0814 16:09:59.449710   21995 cni.go:84] Creating CNI manager for ""
	I0814 16:09:59.449735   21995 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0814 16:09:59.449747   21995 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0814 16:09:59.449815   21995 start.go:340] cluster config:
	{Name:addons-146898 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-146898 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 16:09:59.451289   21995 out.go:177] * Starting "addons-146898" primary control-plane node in "addons-146898" cluster
	I0814 16:09:59.452617   21995 cache.go:121] Beginning downloading kic base image for docker with crio
	I0814 16:09:59.453876   21995 out.go:177] * Pulling base image v0.0.44-1723567951-19429 ...
	I0814 16:09:59.455042   21995 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 16:09:59.455066   21995 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 in local docker daemon
	I0814 16:09:59.455072   21995 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19446-13813/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0814 16:09:59.455197   21995 cache.go:56] Caching tarball of preloaded images
	I0814 16:09:59.455302   21995 preload.go:172] Found /home/jenkins/minikube-integration/19446-13813/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 16:09:59.455324   21995 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0814 16:09:59.455647   21995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/config.json ...
	I0814 16:09:59.455671   21995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/config.json: {Name:mka383384adb62e92ac44fa7a4a5b834aec85f0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:09:59.470575   21995 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 to local cache
	I0814 16:09:59.470675   21995 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 in local cache directory
	I0814 16:09:59.470696   21995 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 in local cache directory, skipping pull
	I0814 16:09:59.470704   21995 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 exists in cache, skipping pull
	I0814 16:09:59.470711   21995 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 as a tarball
	I0814 16:09:59.470717   21995 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 from local cache
	I0814 16:10:11.599154   21995 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 from cached tarball
	I0814 16:10:11.599200   21995 cache.go:194] Successfully downloaded all kic artifacts
	I0814 16:10:11.599229   21995 start.go:360] acquireMachinesLock for addons-146898: {Name:mk6fb8e1c94b5fd8a8fbd9c1b18b8acac474bc30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 16:10:11.599325   21995 start.go:364] duration metric: took 77.642µs to acquireMachinesLock for "addons-146898"
	I0814 16:10:11.599346   21995 start.go:93] Provisioning new machine with config: &{Name:addons-146898 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-146898 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 16:10:11.599424   21995 start.go:125] createHost starting for "" (driver="docker")
	I0814 16:10:11.601229   21995 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0814 16:10:11.601461   21995 start.go:159] libmachine.API.Create for "addons-146898" (driver="docker")
	I0814 16:10:11.601496   21995 client.go:168] LocalClient.Create starting
	I0814 16:10:11.601602   21995 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19446-13813/.minikube/certs/ca.pem
	I0814 16:10:11.763532   21995 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19446-13813/.minikube/certs/cert.pem
	I0814 16:10:11.964158   21995 cli_runner.go:164] Run: docker network inspect addons-146898 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0814 16:10:11.979557   21995 cli_runner.go:211] docker network inspect addons-146898 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0814 16:10:11.979625   21995 network_create.go:284] running [docker network inspect addons-146898] to gather additional debugging logs...
	I0814 16:10:11.979642   21995 cli_runner.go:164] Run: docker network inspect addons-146898
	W0814 16:10:11.994695   21995 cli_runner.go:211] docker network inspect addons-146898 returned with exit code 1
	I0814 16:10:11.994728   21995 network_create.go:287] error running [docker network inspect addons-146898]: docker network inspect addons-146898: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-146898 not found
	I0814 16:10:11.994745   21995 network_create.go:289] output of [docker network inspect addons-146898]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-146898 not found
	
	** /stderr **
	I0814 16:10:11.994860   21995 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0814 16:10:12.010588   21995 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018f4790}
	I0814 16:10:12.010639   21995 network_create.go:124] attempt to create docker network addons-146898 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0814 16:10:12.010699   21995 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-146898 addons-146898
	I0814 16:10:12.070239   21995 network_create.go:108] docker network addons-146898 192.168.49.0/24 created
	I0814 16:10:12.070269   21995 kic.go:121] calculated static IP "192.168.49.2" for the "addons-146898" container
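	Aside: the static IP above is not queried from Docker; it is derived from the subnet minikube just created (gateway = network address + 1, first container = + 2). A minimal Go sketch of that arithmetic, assuming the 192.168.49.0/24 subnet from this log (illustrative, not minikube's actual kic code):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Parse the subnet chosen above and derive the gateway/container addresses.
	_, ipnet, err := net.ParseCIDR("192.168.49.0/24")
	if err != nil {
		panic(err)
	}
	base := ipnet.IP.To4()
	gateway := net.IPv4(base[0], base[1], base[2], base[3]+1)   // 192.168.49.1
	container := net.IPv4(base[0], base[1], base[2], base[3]+2) // 192.168.49.2
	fmt.Println(gateway, container)
}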
	I0814 16:10:12.070316   21995 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0814 16:10:12.085419   21995 cli_runner.go:164] Run: docker volume create addons-146898 --label name.minikube.sigs.k8s.io=addons-146898 --label created_by.minikube.sigs.k8s.io=true
	I0814 16:10:12.102037   21995 oci.go:103] Successfully created a docker volume addons-146898
	I0814 16:10:12.102127   21995 cli_runner.go:164] Run: docker run --rm --name addons-146898-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-146898 --entrypoint /usr/bin/test -v addons-146898:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 -d /var/lib
	I0814 16:10:17.100088   21995 cli_runner.go:217] Completed: docker run --rm --name addons-146898-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-146898 --entrypoint /usr/bin/test -v addons-146898:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 -d /var/lib: (4.997925179s)
	I0814 16:10:17.100115   21995 oci.go:107] Successfully prepared a docker volume addons-146898
	I0814 16:10:17.100130   21995 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 16:10:17.100149   21995 kic.go:194] Starting extracting preloaded images to volume ...
	I0814 16:10:17.100198   21995 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19446-13813/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-146898:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 -I lz4 -xf /preloaded.tar -C /extractDir
	I0814 16:10:21.551307   21995 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19446-13813/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-146898:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 -I lz4 -xf /preloaded.tar -C /extractDir: (4.451074031s)
	I0814 16:10:21.551336   21995 kic.go:203] duration metric: took 4.45118457s to extract preloaded images to volume ...
	W0814 16:10:21.551465   21995 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0814 16:10:21.551574   21995 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0814 16:10:21.599983   21995 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-146898 --name addons-146898 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-146898 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-146898 --network addons-146898 --ip 192.168.49.2 --volume addons-146898:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083
	I0814 16:10:21.906833   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Running}}
	I0814 16:10:21.924643   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:21.942254   21995 cli_runner.go:164] Run: docker exec addons-146898 stat /var/lib/dpkg/alternatives/iptables
	I0814 16:10:21.982856   21995 oci.go:144] the created container "addons-146898" has a running status.
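	Aside: the repeated docker container inspect --format={{.State.Status}} polls above can equally be expressed against the Docker Engine API. A minimal sketch using the Docker Go SDK (github.com/docker/docker/client is an assumption here; the test itself shells out to the docker CLI):

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()
	// Same check as the --format={{.State.Running}} invocation in the log.
	info, err := cli.ContainerInspect(context.Background(), "addons-146898")
	if err != nil {
		panic(err)
	}
	fmt.Println(info.State.Status, info.State.Running)
}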
	I0814 16:10:21.982886   21995 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa...
	I0814 16:10:22.151232   21995 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0814 16:10:22.171650   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:22.191411   21995 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0814 16:10:22.191433   21995 kic_runner.go:114] Args: [docker exec --privileged addons-146898 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0814 16:10:22.252701   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:22.272149   21995 machine.go:94] provisionDockerMachine start ...
	I0814 16:10:22.272256   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:22.288748   21995 main.go:141] libmachine: Using SSH client type: native
	I0814 16:10:22.288957   21995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0814 16:10:22.288971   21995 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 16:10:22.504203   21995 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-146898
	
	I0814 16:10:22.504227   21995 ubuntu.go:169] provisioning hostname "addons-146898"
	I0814 16:10:22.504272   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:22.521315   21995 main.go:141] libmachine: Using SSH client type: native
	I0814 16:10:22.521566   21995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0814 16:10:22.521592   21995 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-146898 && echo "addons-146898" | sudo tee /etc/hostname
	I0814 16:10:22.659715   21995 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-146898
	
	I0814 16:10:22.659794   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:22.676059   21995 main.go:141] libmachine: Using SSH client type: native
	I0814 16:10:22.676266   21995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0814 16:10:22.676291   21995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-146898' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-146898/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-146898' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 16:10:22.800787   21995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
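	Aside: each provisioning command above runs over the container's published 22/tcp port (127.0.0.1:32768) using the generated id_rsa key. A minimal sketch of the same round trip with golang.org/x/crypto/ssh, assuming the key path and port from this log:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local kic container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32768", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	// One session per command, as libmachine does for each step above.
	out, err := sess.CombinedOutput(`sudo hostname addons-146898 && echo "addons-146898" | sudo tee /etc/hostname`)
	fmt.Println(string(out), err)
}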
	I0814 16:10:22.800816   21995 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13813/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13813/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13813/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13813/.minikube}
	I0814 16:10:22.800854   21995 ubuntu.go:177] setting up certificates
	I0814 16:10:22.800867   21995 provision.go:84] configureAuth start
	I0814 16:10:22.800921   21995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-146898
	I0814 16:10:22.816772   21995 provision.go:143] copyHostCerts
	I0814 16:10:22.816848   21995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13813/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13813/.minikube/key.pem (1679 bytes)
	I0814 16:10:22.816978   21995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13813/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13813/.minikube/ca.pem (1078 bytes)
	I0814 16:10:22.817083   21995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13813/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13813/.minikube/cert.pem (1123 bytes)
	I0814 16:10:22.817169   21995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13813/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13813/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13813/.minikube/certs/ca-key.pem org=jenkins.addons-146898 san=[127.0.0.1 192.168.49.2 addons-146898 localhost minikube]
	I0814 16:10:22.902600   21995 provision.go:177] copyRemoteCerts
	I0814 16:10:22.902663   21995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 16:10:22.902704   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:22.918646   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:23.009130   21995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13813/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 16:10:23.030136   21995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13813/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0814 16:10:23.050062   21995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13813/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 16:10:23.070575   21995 provision.go:87] duration metric: took 269.69399ms to configureAuth
	I0814 16:10:23.070611   21995 ubuntu.go:193] setting minikube options for container-runtime
	I0814 16:10:23.070780   21995 config.go:182] Loaded profile config "addons-146898": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:10:23.070887   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:23.087360   21995 main.go:141] libmachine: Using SSH client type: native
	I0814 16:10:23.087539   21995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0814 16:10:23.087563   21995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 16:10:23.297175   21995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 16:10:23.297204   21995 machine.go:97] duration metric: took 1.025009823s to provisionDockerMachine
	I0814 16:10:23.297217   21995 client.go:171] duration metric: took 11.695713856s to LocalClient.Create
	I0814 16:10:23.297238   21995 start.go:167] duration metric: took 11.695777559s to libmachine.API.Create "addons-146898"
	I0814 16:10:23.297250   21995 start.go:293] postStartSetup for "addons-146898" (driver="docker")
	I0814 16:10:23.297264   21995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 16:10:23.297320   21995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 16:10:23.297365   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:23.313668   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:23.409471   21995 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 16:10:23.412584   21995 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0814 16:10:23.412654   21995 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0814 16:10:23.412667   21995 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0814 16:10:23.412675   21995 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0814 16:10:23.412685   21995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13813/.minikube/addons for local assets ...
	I0814 16:10:23.412744   21995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13813/.minikube/files for local assets ...
	I0814 16:10:23.412766   21995 start.go:296] duration metric: took 115.510043ms for postStartSetup
	I0814 16:10:23.413052   21995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-146898
	I0814 16:10:23.430533   21995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/config.json ...
	I0814 16:10:23.430768   21995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:10:23.430811   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:23.447602   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:23.533366   21995 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0814 16:10:23.537135   21995 start.go:128] duration metric: took 11.937698863s to createHost
	I0814 16:10:23.537153   21995 start.go:83] releasing machines lock for "addons-146898", held for 11.93781786s
	I0814 16:10:23.537201   21995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-146898
	I0814 16:10:23.552354   21995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 16:10:23.552426   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:23.552362   21995 ssh_runner.go:195] Run: cat /version.json
	I0814 16:10:23.552547   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:23.572255   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:23.572608   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:23.736498   21995 ssh_runner.go:195] Run: systemctl --version
	I0814 16:10:23.740633   21995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 16:10:23.877823   21995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0814 16:10:23.882233   21995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 16:10:23.899571   21995 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0814 16:10:23.899658   21995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 16:10:23.925228   21995 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0814 16:10:23.925272   21995 start.go:495] detecting cgroup driver to use...
	I0814 16:10:23.925309   21995 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0814 16:10:23.925371   21995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 16:10:23.939525   21995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 16:10:23.949687   21995 docker.go:217] disabling cri-docker service (if available) ...
	I0814 16:10:23.949739   21995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 16:10:23.962691   21995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 16:10:23.975985   21995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 16:10:24.054908   21995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 16:10:24.130206   21995 docker.go:233] disabling docker service ...
	I0814 16:10:24.130278   21995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 16:10:24.147251   21995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 16:10:24.157614   21995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 16:10:24.228902   21995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 16:10:24.311000   21995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 16:10:24.321749   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 16:10:24.336358   21995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 16:10:24.336431   21995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:10:24.345715   21995 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 16:10:24.345780   21995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:10:24.354949   21995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:10:24.363979   21995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:10:24.373561   21995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 16:10:24.382756   21995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:10:24.391605   21995 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:10:24.406447   21995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:10:24.415919   21995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 16:10:24.423559   21995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 16:10:24.431437   21995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 16:10:24.501627   21995 ssh_runner.go:195] Run: sudo systemctl restart crio
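	Aside: each sed one-liner above is an in-place rewrite of /etc/crio/crio.conf.d/02-crio.conf, followed by the single daemon-reload and crio restart. The pause-image edit, sketched in Go for clarity (illustrative; minikube really does shell out to sed as logged):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}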
	I0814 16:10:24.591905   21995 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 16:10:24.591988   21995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 16:10:24.595358   21995 start.go:563] Will wait 60s for crictl version
	I0814 16:10:24.595423   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:10:24.598438   21995 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 16:10:24.631363   21995 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0814 16:10:24.631523   21995 ssh_runner.go:195] Run: crio --version
	I0814 16:10:24.666001   21995 ssh_runner.go:195] Run: crio --version
	I0814 16:10:24.701237   21995 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0814 16:10:24.702465   21995 cli_runner.go:164] Run: docker network inspect addons-146898 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0814 16:10:24.718411   21995 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0814 16:10:24.721759   21995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 16:10:24.731577   21995 kubeadm.go:883] updating cluster {Name:addons-146898 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-146898 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 16:10:24.731687   21995 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 16:10:24.731736   21995 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 16:10:24.794777   21995 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 16:10:24.794798   21995 crio.go:433] Images already preloaded, skipping extraction
	I0814 16:10:24.794839   21995 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 16:10:24.826224   21995 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 16:10:24.826244   21995 cache_images.go:84] Images are preloaded, skipping loading
	I0814 16:10:24.826252   21995 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0814 16:10:24.826338   21995 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-146898 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-146898 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 16:10:24.826398   21995 ssh_runner.go:195] Run: crio config
	I0814 16:10:24.867740   21995 cni.go:84] Creating CNI manager for ""
	I0814 16:10:24.867765   21995 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0814 16:10:24.867777   21995 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 16:10:24.867805   21995 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-146898 NodeName:addons-146898 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 16:10:24.867964   21995 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-146898"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
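	Aside: the generated config is four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) written as one file to /var/tmp/minikube/kubeadm.yaml.new below. A minimal sketch of splitting and parsing such a multi-document file with sigs.k8s.io/yaml (an assumption; the test performs no such validation itself):

package main

import (
	"fmt"
	"strings"

	"sigs.k8s.io/yaml"
)

func main() {
	// Abbreviated stand-in for the full config shown above.
	cfg := `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration`
	for _, doc := range strings.Split(cfg, "\n---\n") {
		var m map[string]interface{}
		if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
			panic(err)
		}
		fmt.Printf("%s/%s\n", m["apiVersion"], m["kind"])
	}
}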
	
	I0814 16:10:24.868024   21995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 16:10:24.876179   21995 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 16:10:24.876239   21995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 16:10:24.883999   21995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0814 16:10:24.899859   21995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 16:10:24.916572   21995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0814 16:10:24.932469   21995 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0814 16:10:24.935660   21995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 16:10:24.945699   21995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 16:10:25.022840   21995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 16:10:25.035349   21995 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898 for IP: 192.168.49.2
	I0814 16:10:25.035377   21995 certs.go:194] generating shared ca certs ...
	I0814 16:10:25.035400   21995 certs.go:226] acquiring lock for ca certs: {Name:mk1285ad10e917a8c21c37d6bbfc6630b395fe15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:25.035524   21995 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13813/.minikube/ca.key
	I0814 16:10:25.145375   21995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13813/.minikube/ca.crt ...
	I0814 16:10:25.145402   21995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13813/.minikube/ca.crt: {Name:mk56de38a5a6a065840a53302703be75913b7540 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:25.145560   21995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13813/.minikube/ca.key ...
	I0814 16:10:25.145570   21995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13813/.minikube/ca.key: {Name:mkb10292095ca52c2c9f762c536853026e7bd0e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:25.145649   21995 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13813/.minikube/proxy-client-ca.key
	I0814 16:10:25.372318   21995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13813/.minikube/proxy-client-ca.crt ...
	I0814 16:10:25.372352   21995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13813/.minikube/proxy-client-ca.crt: {Name:mk0c143209c2ba1cec1748241e70c4402f002142 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:25.372523   21995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13813/.minikube/proxy-client-ca.key ...
	I0814 16:10:25.372535   21995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13813/.minikube/proxy-client-ca.key: {Name:mk1f10b962886ecc72031545289125c894d8b027 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
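	Aside: the minikubeCA and proxyClientCA generation above boils down to a self-signed x509 CA written as a PEM cert/key pair. A minimal sketch with crypto/x509 (field values are illustrative; minikube's own crypto helpers differ in details such as key size and validity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	// Self-signed: template and parent are the same certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	if err := os.WriteFile("ca.crt", certPEM, 0o644); err != nil {
		panic(err)
	}
	if err := os.WriteFile("ca.key", keyPEM, 0o600); err != nil {
		panic(err)
	}
}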
	I0814 16:10:25.372677   21995 certs.go:256] generating profile certs ...
	I0814 16:10:25.372729   21995 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.key
	I0814 16:10:25.372751   21995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.crt with IP's: []
	I0814 16:10:25.846336   21995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.crt ...
	I0814 16:10:25.846366   21995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.crt: {Name:mkbab85371741d02d75da73bad152a83d2c5d78f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:25.846529   21995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.key ...
	I0814 16:10:25.846539   21995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.key: {Name:mkd682f4af1dbea838f8a0ca34c27f4648750679 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:25.846611   21995 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/apiserver.key.438dba49
	I0814 16:10:25.846630   21995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/apiserver.crt.438dba49 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0814 16:10:26.031631   21995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/apiserver.crt.438dba49 ...
	I0814 16:10:26.031659   21995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/apiserver.crt.438dba49: {Name:mk29493867ecc85fd95db1c4f44fb6995940b598 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:26.031811   21995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/apiserver.key.438dba49 ...
	I0814 16:10:26.031825   21995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/apiserver.key.438dba49: {Name:mk12fb7d0295d04ac0c5b1f91ae368bae36df922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:26.031892   21995 certs.go:381] copying /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/apiserver.crt.438dba49 -> /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/apiserver.crt
	I0814 16:10:26.031978   21995 certs.go:385] copying /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/apiserver.key.438dba49 -> /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/apiserver.key
	I0814 16:10:26.032029   21995 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/proxy-client.key
	I0814 16:10:26.032046   21995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/proxy-client.crt with IP's: []
	I0814 16:10:26.502174   21995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/proxy-client.crt ...
	I0814 16:10:26.502210   21995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/proxy-client.crt: {Name:mkf78e1ebb0c27c1a090b66eac20e6a91bb44b37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:26.502390   21995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/proxy-client.key ...
	I0814 16:10:26.502400   21995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/proxy-client.key: {Name:mk0562feedc116c51ceaaa271ce12c328e2b3fe6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:26.502566   21995 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13813/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 16:10:26.502599   21995 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13813/.minikube/certs/ca.pem (1078 bytes)
	I0814 16:10:26.502624   21995 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13813/.minikube/certs/cert.pem (1123 bytes)
	I0814 16:10:26.502646   21995 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13813/.minikube/certs/key.pem (1679 bytes)
	I0814 16:10:26.503229   21995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13813/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 16:10:26.525140   21995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13813/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 16:10:26.545698   21995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13813/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 16:10:26.567345   21995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13813/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 16:10:26.589135   21995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0814 16:10:26.609794   21995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 16:10:26.630516   21995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 16:10:26.653458   21995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 16:10:26.675308   21995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13813/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 16:10:26.696489   21995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 16:10:26.711702   21995 ssh_runner.go:195] Run: openssl version
	I0814 16:10:26.716703   21995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 16:10:26.724803   21995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:10:26.727905   21995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:10:26.728000   21995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:10:26.734007   21995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 16:10:26.742190   21995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 16:10:26.745293   21995 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0814 16:10:26.745338   21995 kubeadm.go:392] StartCluster: {Name:addons-146898 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-146898 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 16:10:26.745412   21995 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 16:10:26.745460   21995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 16:10:26.778708   21995 cri.go:89] found id: ""
	I0814 16:10:26.778771   21995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 16:10:26.786818   21995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 16:10:26.794455   21995 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0814 16:10:26.794503   21995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 16:10:26.802178   21995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 16:10:26.802193   21995 kubeadm.go:157] found existing configuration files:
	
	I0814 16:10:26.802232   21995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 16:10:26.809644   21995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 16:10:26.809701   21995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 16:10:26.816827   21995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 16:10:26.824261   21995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 16:10:26.824314   21995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 16:10:26.831763   21995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 16:10:26.839741   21995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 16:10:26.839789   21995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 16:10:26.847303   21995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 16:10:26.855171   21995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 16:10:26.855231   21995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 16:10:26.862683   21995 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
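	Aside: the Start line above is the whole bootstrap in one command: bash -c with a prepended binaries PATH, the generated config, and a long ignore-preflight list. Sketched with os/exec (the ignore list is abbreviated here; the full set is in the log line above):

package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "env", "PATH=/var/lib/minikube/binaries/v1.31.0:"+os.Getenv("PATH"),
		"kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors=Swap,NumCPU,Mem,SystemVerification") // abbreviated; see the log line above
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}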
	I0814 16:10:26.897687   21995 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 16:10:26.897748   21995 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 16:10:26.913810   21995 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0814 16:10:26.913876   21995 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1066-gcp
	I0814 16:10:26.913905   21995 kubeadm.go:310] OS: Linux
	I0814 16:10:26.914007   21995 kubeadm.go:310] CGROUPS_CPU: enabled
	I0814 16:10:26.914107   21995 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0814 16:10:26.914186   21995 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0814 16:10:26.914266   21995 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0814 16:10:26.914346   21995 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0814 16:10:26.914420   21995 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0814 16:10:26.914493   21995 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0814 16:10:26.914559   21995 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0814 16:10:26.914621   21995 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0814 16:10:26.963124   21995 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 16:10:26.963267   21995 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 16:10:26.963398   21995 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 16:10:26.969795   21995 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 16:10:26.972364   21995 out.go:204]   - Generating certificates and keys ...
	I0814 16:10:26.972461   21995 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 16:10:26.972541   21995 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 16:10:27.170427   21995 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0814 16:10:27.496497   21995 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0814 16:10:27.711288   21995 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0814 16:10:27.827672   21995 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0814 16:10:27.966921   21995 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0814 16:10:27.967035   21995 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-146898 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0814 16:10:28.044011   21995 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0814 16:10:28.044149   21995 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-146898 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0814 16:10:28.127325   21995 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0814 16:10:28.198683   21995 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0814 16:10:28.395148   21995 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0814 16:10:28.395214   21995 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 16:10:28.524084   21995 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 16:10:28.817223   21995 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 16:10:29.156724   21995 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 16:10:29.431848   21995 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 16:10:29.641353   21995 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 16:10:29.642621   21995 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 16:10:29.645247   21995 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 16:10:29.647223   21995 out.go:204]   - Booting up control plane ...
	I0814 16:10:29.647356   21995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 16:10:29.647463   21995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 16:10:29.648330   21995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 16:10:29.659996   21995 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 16:10:29.665342   21995 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 16:10:29.665404   21995 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 16:10:29.738614   21995 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 16:10:29.738723   21995 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 16:10:30.240106   21995 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.600369ms
	I0814 16:10:30.240216   21995 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 16:10:34.741915   21995 kubeadm.go:310] [api-check] The API server is healthy after 4.501781554s
	I0814 16:10:34.751954   21995 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 16:10:34.763579   21995 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 16:10:34.780232   21995 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 16:10:34.780457   21995 kubeadm.go:310] [mark-control-plane] Marking the node addons-146898 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 16:10:34.786848   21995 kubeadm.go:310] [bootstrap-token] Using token: rjop2a.3xxdxyu2rw5j4mls
	I0814 16:10:34.788398   21995 out.go:204]   - Configuring RBAC rules ...
	I0814 16:10:34.788533   21995 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 16:10:34.791209   21995 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 16:10:34.795776   21995 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 16:10:34.798697   21995 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 16:10:34.800870   21995 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 16:10:34.802890   21995 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 16:10:35.147551   21995 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 16:10:35.565924   21995 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 16:10:36.148537   21995 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 16:10:36.149554   21995 kubeadm.go:310] 
	I0814 16:10:36.149651   21995 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 16:10:36.149673   21995 kubeadm.go:310] 
	I0814 16:10:36.149752   21995 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 16:10:36.149767   21995 kubeadm.go:310] 
	I0814 16:10:36.149827   21995 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 16:10:36.149908   21995 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 16:10:36.149958   21995 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 16:10:36.149987   21995 kubeadm.go:310] 
	I0814 16:10:36.150115   21995 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 16:10:36.150148   21995 kubeadm.go:310] 
	I0814 16:10:36.150211   21995 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 16:10:36.150223   21995 kubeadm.go:310] 
	I0814 16:10:36.150294   21995 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 16:10:36.150400   21995 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 16:10:36.150505   21995 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 16:10:36.150521   21995 kubeadm.go:310] 
	I0814 16:10:36.150634   21995 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 16:10:36.150751   21995 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 16:10:36.150763   21995 kubeadm.go:310] 
	I0814 16:10:36.150868   21995 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rjop2a.3xxdxyu2rw5j4mls \
	I0814 16:10:36.151000   21995 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9e78517872b4f8b632b00f802290dddbf43139dde7a5a320b299f5698ab99227 \
	I0814 16:10:36.151029   21995 kubeadm.go:310] 	--control-plane 
	I0814 16:10:36.151035   21995 kubeadm.go:310] 
	I0814 16:10:36.151132   21995 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 16:10:36.151141   21995 kubeadm.go:310] 
	I0814 16:10:36.151228   21995 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rjop2a.3xxdxyu2rw5j4mls \
	I0814 16:10:36.151328   21995 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9e78517872b4f8b632b00f802290dddbf43139dde7a5a320b299f5698ab99227 
	I0814 16:10:36.153189   21995 kubeadm.go:310] W0814 16:10:26.895177    1299 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 16:10:36.153463   21995 kubeadm.go:310] W0814 16:10:26.895768    1299 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 16:10:36.153647   21995 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1066-gcp\n", err: exit status 1
	I0814 16:10:36.153744   21995 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
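Both warnings above are advisory and are captured verbatim from kubeadm's stderr. Per the messages themselves, they would be cleared by migrating the deprecated v1beta3 config and enabling the kubelet unit; a sketch of the suggested commands (taken from the warning text, not executed in this run):

	kubeadm config migrate --old-config old.yaml --new-config new.yaml
	sudo systemctl enable kubelet.service
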
	I0814 16:10:36.153764   21995 cni.go:84] Creating CNI manager for ""
	I0814 16:10:36.153774   21995 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0814 16:10:36.156438   21995 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0814 16:10:36.157796   21995 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0814 16:10:36.161471   21995 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0814 16:10:36.161491   21995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0814 16:10:36.177769   21995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0814 16:10:36.370500   21995 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 16:10:36.370588   21995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:36.370588   21995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-146898 minikube.k8s.io/updated_at=2024_08_14T16_10_36_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35 minikube.k8s.io/name=addons-146898 minikube.k8s.io/primary=true
	I0814 16:10:36.377437   21995 ops.go:34] apiserver oom_adj: -16
	I0814 16:10:36.457868   21995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:36.958471   21995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:37.458216   21995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:37.958777   21995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:38.458773   21995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:38.958312   21995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:39.458290   21995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:39.957984   21995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:40.458060   21995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:40.544372   21995 kubeadm.go:1113] duration metric: took 4.173855902s to wait for elevateKubeSystemPrivileges
	I0814 16:10:40.544407   21995 kubeadm.go:394] duration metric: took 13.799071853s to StartCluster
	I0814 16:10:40.544424   21995 settings.go:142] acquiring lock: {Name:mka72e833cc56b9ba293232cfc25e94fae8a2ef4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:40.544534   21995 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19446-13813/kubeconfig
	I0814 16:10:40.545051   21995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13813/kubeconfig: {Name:mkf1cd97562485c31d14c03886c1adfb8630debe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:40.545274   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0814 16:10:40.545304   21995 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 16:10:40.545359   21995 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0814 16:10:40.545460   21995 addons.go:69] Setting yakd=true in profile "addons-146898"
	I0814 16:10:40.545475   21995 addons.go:69] Setting default-storageclass=true in profile "addons-146898"
	I0814 16:10:40.545487   21995 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-146898"
	I0814 16:10:40.545497   21995 config.go:182] Loaded profile config "addons-146898": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:10:40.545499   21995 addons.go:69] Setting cloud-spanner=true in profile "addons-146898"
	I0814 16:10:40.545510   21995 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-146898"
	I0814 16:10:40.545517   21995 addons.go:69] Setting storage-provisioner=true in profile "addons-146898"
	I0814 16:10:40.545534   21995 addons.go:234] Setting addon storage-provisioner=true in "addons-146898"
	I0814 16:10:40.545534   21995 addons.go:234] Setting addon cloud-spanner=true in "addons-146898"
	I0814 16:10:40.545532   21995 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-146898"
	I0814 16:10:40.545544   21995 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-146898"
	I0814 16:10:40.545549   21995 addons.go:69] Setting volcano=true in profile "addons-146898"
	I0814 16:10:40.545560   21995 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-146898"
	I0814 16:10:40.545565   21995 host.go:66] Checking if "addons-146898" exists ...
	I0814 16:10:40.545567   21995 host.go:66] Checking if "addons-146898" exists ...
	I0814 16:10:40.545567   21995 host.go:66] Checking if "addons-146898" exists ...
	I0814 16:10:40.545575   21995 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-146898"
	I0814 16:10:40.545597   21995 host.go:66] Checking if "addons-146898" exists ...
	I0814 16:10:40.545601   21995 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-146898"
	I0814 16:10:40.545571   21995 addons.go:234] Setting addon volcano=true in "addons-146898"
	I0814 16:10:40.545674   21995 addons.go:69] Setting ingress-dns=true in profile "addons-146898"
	I0814 16:10:40.545689   21995 host.go:66] Checking if "addons-146898" exists ...
	I0814 16:10:40.545694   21995 addons.go:234] Setting addon ingress-dns=true in "addons-146898"
	I0814 16:10:40.545724   21995 host.go:66] Checking if "addons-146898" exists ...
	I0814 16:10:40.545882   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.545897   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.546052   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.546054   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.546089   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.546102   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.546200   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.546244   21995 addons.go:69] Setting inspektor-gadget=true in profile "addons-146898"
	I0814 16:10:40.546272   21995 addons.go:234] Setting addon inspektor-gadget=true in "addons-146898"
	I0814 16:10:40.546302   21995 host.go:66] Checking if "addons-146898" exists ...
	I0814 16:10:40.546738   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.545508   21995 addons.go:69] Setting helm-tiller=true in profile "addons-146898"
	I0814 16:10:40.549774   21995 addons.go:234] Setting addon helm-tiller=true in "addons-146898"
	I0814 16:10:40.549818   21995 host.go:66] Checking if "addons-146898" exists ...
	I0814 16:10:40.550357   21995 out.go:177] * Verifying Kubernetes components...
	I0814 16:10:40.550494   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.551913   21995 addons.go:69] Setting ingress=true in profile "addons-146898"
	I0814 16:10:40.551957   21995 addons.go:234] Setting addon ingress=true in "addons-146898"
	I0814 16:10:40.552020   21995 host.go:66] Checking if "addons-146898" exists ...
	I0814 16:10:40.552554   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.553523   21995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 16:10:40.545503   21995 addons.go:234] Setting addon yakd=true in "addons-146898"
	I0814 16:10:40.553636   21995 host.go:66] Checking if "addons-146898" exists ...
	I0814 16:10:40.554055   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.554547   21995 addons.go:69] Setting metrics-server=true in profile "addons-146898"
	I0814 16:10:40.554604   21995 addons.go:234] Setting addon metrics-server=true in "addons-146898"
	I0814 16:10:40.554639   21995 host.go:66] Checking if "addons-146898" exists ...
	I0814 16:10:40.555791   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.545465   21995 addons.go:69] Setting gcp-auth=true in profile "addons-146898"
	I0814 16:10:40.555878   21995 addons.go:69] Setting volumesnapshots=true in profile "addons-146898"
	I0814 16:10:40.556148   21995 addons.go:234] Setting addon volumesnapshots=true in "addons-146898"
	I0814 16:10:40.556212   21995 host.go:66] Checking if "addons-146898" exists ...
	I0814 16:10:40.556311   21995 mustload.go:65] Loading cluster: addons-146898
	I0814 16:10:40.556498   21995 config.go:182] Loaded profile config "addons-146898": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:10:40.556695   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.555883   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.545513   21995 addons.go:69] Setting registry=true in profile "addons-146898"
	I0814 16:10:40.557599   21995 addons.go:234] Setting addon registry=true in "addons-146898"
	I0814 16:10:40.557677   21995 host.go:66] Checking if "addons-146898" exists ...
	I0814 16:10:40.558169   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.589150   21995 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0814 16:10:40.589497   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.589500   21995 addons.go:234] Setting addon default-storageclass=true in "addons-146898"
	I0814 16:10:40.590837   21995 host.go:66] Checking if "addons-146898" exists ...
	I0814 16:10:40.591345   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.591379   21995 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0814 16:10:40.591396   21995 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0814 16:10:40.591453   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	W0814 16:10:40.595734   21995 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0814 16:10:40.596348   21995 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0814 16:10:40.599689   21995 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0814 16:10:40.599707   21995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0814 16:10:40.599758   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:40.601378   21995 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-146898"
	I0814 16:10:40.601426   21995 host.go:66] Checking if "addons-146898" exists ...
	I0814 16:10:40.601878   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:40.616154   21995 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0814 16:10:40.617836   21995 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0814 16:10:40.617863   21995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0814 16:10:40.617926   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:40.620193   21995 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0814 16:10:40.620236   21995 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0814 16:10:40.621496   21995 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0814 16:10:40.621516   21995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0814 16:10:40.621558   21995 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0814 16:10:40.621573   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:40.624320   21995 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0814 16:10:40.624380   21995 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0814 16:10:40.627557   21995 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0814 16:10:40.627580   21995 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0814 16:10:40.627639   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:40.627807   21995 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0814 16:10:40.629460   21995 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0814 16:10:40.630748   21995 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0814 16:10:40.632007   21995 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0814 16:10:40.633304   21995 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0814 16:10:40.637123   21995 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0814 16:10:40.637154   21995 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0814 16:10:40.637220   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:40.645908   21995 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 16:10:40.647241   21995 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 16:10:40.647261   21995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 16:10:40.647320   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:40.651578   21995 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0814 16:10:40.652673   21995 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0814 16:10:40.652692   21995 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0814 16:10:40.652748   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:40.657199   21995 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 16:10:40.657219   21995 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 16:10:40.657272   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:40.676770   21995 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0814 16:10:40.677120   21995 host.go:66] Checking if "addons-146898" exists ...
	I0814 16:10:40.679455   21995 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0814 16:10:40.681021   21995 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0814 16:10:40.682451   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:40.682672   21995 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0814 16:10:40.682683   21995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0814 16:10:40.682730   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:40.685980   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:40.689084   21995 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0814 16:10:40.690578   21995 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0814 16:10:40.692551   21995 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0814 16:10:40.692570   21995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0814 16:10:40.692626   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:40.692790   21995 out.go:177]   - Using image docker.io/busybox:stable
	I0814 16:10:40.694709   21995 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0814 16:10:40.694724   21995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0814 16:10:40.694779   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:40.694789   21995 out.go:177]   - Using image docker.io/registry:2.8.3
	I0814 16:10:40.695074   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:40.698120   21995 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0814 16:10:40.698161   21995 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0814 16:10:40.699236   21995 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 16:10:40.699260   21995 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 16:10:40.699314   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:40.703650   21995 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0814 16:10:40.703672   21995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0814 16:10:40.703786   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:40.703777   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:40.704206   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:40.716710   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:40.718953   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:40.728219   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:40.728941   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:40.729401   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0814 16:10:40.739106   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:40.739459   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:40.739633   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	W0814 16:10:40.745486   21995 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0814 16:10:40.745517   21995 retry.go:31] will retry after 270.544191ms: ssh: handshake failed: EOF
	I0814 16:10:40.750685   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:40.758727   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:40.946132   21995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 16:10:41.129133   21995 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0814 16:10:41.129208   21995 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0814 16:10:41.144261   21995 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0814 16:10:41.144358   21995 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0814 16:10:41.145741   21995 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0814 16:10:41.145805   21995 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0814 16:10:41.225823   21995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0814 16:10:41.228239   21995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0814 16:10:41.235116   21995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0814 16:10:41.329774   21995 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0814 16:10:41.329807   21995 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0814 16:10:41.336223   21995 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 16:10:41.336308   21995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0814 16:10:41.336621   21995 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0814 16:10:41.336677   21995 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0814 16:10:41.337202   21995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0814 16:10:41.339463   21995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 16:10:41.341737   21995 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0814 16:10:41.341790   21995 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0814 16:10:41.349555   21995 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0814 16:10:41.349586   21995 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0814 16:10:41.425610   21995 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0814 16:10:41.425708   21995 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0814 16:10:41.449965   21995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0814 16:10:41.532066   21995 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0814 16:10:41.532094   21995 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0814 16:10:41.538040   21995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 16:10:41.541457   21995 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0814 16:10:41.541485   21995 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0814 16:10:41.545573   21995 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0814 16:10:41.545601   21995 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0814 16:10:41.629713   21995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0814 16:10:41.634874   21995 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 16:10:41.634901   21995 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 16:10:41.646193   21995 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0814 16:10:41.646225   21995 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0814 16:10:41.733066   21995 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0814 16:10:41.733149   21995 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0814 16:10:41.745724   21995 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0814 16:10:41.745752   21995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0814 16:10:41.825571   21995 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0814 16:10:41.825686   21995 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0814 16:10:41.827933   21995 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0814 16:10:41.828007   21995 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0814 16:10:41.828335   21995 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 16:10:41.828393   21995 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 16:10:41.926954   21995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0814 16:10:42.027581   21995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 16:10:42.036924   21995 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0814 16:10:42.036953   21995 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0814 16:10:42.049604   21995 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.320170581s)
	I0814 16:10:42.049639   21995 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
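For reference, the sed pipeline completed above rewrites the coredns ConfigMap in place. Reconstructed from the sed expressions (a sketch assuming the stock Corefile layout; only the injected fragments and the lines they anchor to are shown, other directives elided):

	        log
	        errors
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
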
	I0814 16:10:42.050200   21995 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.104029969s)
	I0814 16:10:42.051180   21995 node_ready.go:35] waiting up to 6m0s for node "addons-146898" to be "Ready" ...
	I0814 16:10:42.139705   21995 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0814 16:10:42.139797   21995 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0814 16:10:42.147371   21995 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0814 16:10:42.147444   21995 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0814 16:10:42.331860   21995 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0814 16:10:42.331885   21995 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0814 16:10:42.433208   21995 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0814 16:10:42.433284   21995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0814 16:10:42.627989   21995 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0814 16:10:42.628026   21995 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0814 16:10:42.737435   21995 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0814 16:10:42.737468   21995 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0814 16:10:42.828269   21995 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-146898" context rescaled to 1 replicas
	I0814 16:10:42.832098   21995 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0814 16:10:42.832182   21995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0814 16:10:42.935314   21995 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0814 16:10:42.935398   21995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0814 16:10:42.945944   21995 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0814 16:10:42.945969   21995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0814 16:10:42.947644   21995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0814 16:10:43.141650   21995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0814 16:10:43.343986   21995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0814 16:10:43.438125   21995 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0814 16:10:43.438196   21995 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0814 16:10:43.734540   21995 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0814 16:10:43.734639   21995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0814 16:10:43.846925   21995 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0814 16:10:43.846952   21995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0814 16:10:44.031139   21995 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0814 16:10:44.031174   21995 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0814 16:10:44.139151   21995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.913012983s)
	I0814 16:10:44.143959   21995 node_ready.go:53] node "addons-146898" has status "Ready":"False"
	I0814 16:10:44.152122   21995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0814 16:10:46.633256   21995 node_ready.go:53] node "addons-146898" has status "Ready":"False"
	I0814 16:10:46.649437   21995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.421153504s)
	I0814 16:10:46.649647   21995 addons.go:475] Verifying addon ingress=true in "addons-146898"
	I0814 16:10:46.649674   21995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.31241243s)
	I0814 16:10:46.649772   21995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.310171316s)
	I0814 16:10:46.649835   21995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.199845879s)
	I0814 16:10:46.649867   21995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.111800479s)
	I0814 16:10:46.649914   21995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.020115931s)
	I0814 16:10:46.649956   21995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.722923905s)
	I0814 16:10:46.649967   21995 addons.go:475] Verifying addon registry=true in "addons-146898"
	I0814 16:10:46.650086   21995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.622412894s)
	I0814 16:10:46.650104   21995 addons.go:475] Verifying addon metrics-server=true in "addons-146898"
	I0814 16:10:46.650149   21995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.702476698s)
	I0814 16:10:46.649609   21995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.414424906s)
	I0814 16:10:46.651156   21995 out.go:177] * Verifying ingress addon...
	I0814 16:10:46.652195   21995 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-146898 service yakd-dashboard -n yakd-dashboard
	
	I0814 16:10:46.652230   21995 out.go:177] * Verifying registry addon...
	I0814 16:10:46.654027   21995 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0814 16:10:46.728036   21995 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0814 16:10:46.740919   21995 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0814 16:10:46.740947   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0814 16:10:46.741416   21995 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
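The 'default-storageclass' error above is an optimistic-concurrency conflict: the addon raced another writer while flipping the is-default-class annotation on the local-path StorageClass, and the run surfaces it as a warning only. A minimal manual equivalent would be (hypothetical commands; "local-path" is from the error above, while "standard" is minikube's usual default class name and is an assumption here):

	kubectl patch storageclass local-path \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
	kubectl patch storageclass standard \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
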
	I0814 16:10:46.830398   21995 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0814 16:10:46.830422   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:47.158757   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:47.231716   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:47.648951   21995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.507251997s)
	W0814 16:10:47.649002   21995 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0814 16:10:47.649050   21995 retry.go:31] will retry after 289.159835ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
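The failed apply above lost a race with VolumeSnapshot CRD registration ("ensure CRDs are installed first"); the run retries after ~289ms and later reapplies with --force. A minimal manual equivalent would gate the snapshot class on the CRD becoming established (a sketch; CRD name and file path as they appear in this run):

	kubectl wait --for=condition=Established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
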
	I0814 16:10:47.649046   21995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.304996419s)
	I0814 16:10:47.659160   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:47.828016   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:47.883745   21995 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0814 16:10:47.883812   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:47.900142   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:47.939257   21995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0814 16:10:48.147337   21995 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0814 16:10:48.227194   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:48.230807   21995 addons.go:234] Setting addon gcp-auth=true in "addons-146898"
	I0814 16:10:48.230863   21995 host.go:66] Checking if "addons-146898" exists ...
	I0814 16:10:48.231401   21995 cli_runner.go:164] Run: docker container inspect addons-146898 --format={{.State.Status}}
	I0814 16:10:48.258896   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:48.262891   21995 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0814 16:10:48.262965   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-146898
	I0814 16:10:48.280399   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/addons-146898/id_rsa Username:docker}
	I0814 16:10:48.361339   21995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.209164402s)
	I0814 16:10:48.361375   21995 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-146898"
	I0814 16:10:48.363003   21995 out.go:177] * Verifying csi-hostpath-driver addon...
	I0814 16:10:48.365336   21995 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0814 16:10:48.429511   21995 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0814 16:10:48.429539   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
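
The kapi.go:75/86/96 lines that dominate the remainder of this log are one poll loop per addon: list the pods matching a label selector in the target namespace, log the aggregate state, and retry every few hundred milliseconds until every pod is Running. An illustrative client-go equivalent of that loop (a sketch, not the code minikube ships):

    // Sketch: poll pods matching a label selector until all report Running.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func allRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil // transient errors and empty lists: keep polling
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                        return false, nil
                    }
                }
                return true, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := allRunning(context.Background(), cs, "kube-system",
            "kubernetes.io/minikube-addons=csi-hostpath-driver"); err != nil {
            panic(err)
        }
    }
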
	I0814 16:10:48.658025   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:48.730891   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:48.868843   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:49.054299   21995 node_ready.go:53] node "addons-146898" has status "Ready":"False"
	I0814 16:10:49.158521   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:49.231170   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:49.368445   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:49.657471   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:49.731412   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:49.868931   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:50.158239   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:50.231311   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:50.428222   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:50.731752   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:50.732028   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:50.868538   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:51.158359   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:51.185534   21995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.246222096s)
	I0814 16:10:51.185584   21995 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.922656116s)
	I0814 16:10:51.187466   21995 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0814 16:10:51.188824   21995 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0814 16:10:51.190029   21995 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0814 16:10:51.190043   21995 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0814 16:10:51.231979   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:51.239251   21995 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0814 16:10:51.239274   21995 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0814 16:10:51.257557   21995 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0814 16:10:51.257580   21995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0814 16:10:51.274358   21995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
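
The gcp-auth sequence above shows minikube's generic addon mechanism: each manifest is scp'd into /etc/kubernetes/addons on the node, then the whole batch is applied with the cluster's pinned kubectl under its own kubeconfig. A hypothetical sketch of that apply step, with paths and version taken from the log and the SSH transport elided:

    // Hypothetical sketch of the apply step (paths/version from the log;
    // in minikube this command runs on the node over SSH, elided here).
    package addons

    import (
        "fmt"
        "os/exec"
    )

    func applyAddon(manifests ...string) error {
        args := []string{
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.31.0/kubectl", "apply",
        }
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        out, err := exec.Command("sudo", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("kubectl apply: %v\n%s", err, out)
        }
        return nil
    }

For the batch above this would be applyAddon("/etc/kubernetes/addons/gcp-auth-ns.yaml", "/etc/kubernetes/addons/gcp-auth-service.yaml", "/etc/kubernetes/addons/gcp-auth-webhook.yaml").
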
	I0814 16:10:51.369469   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:51.554666   21995 node_ready.go:53] node "addons-146898" has status "Ready":"False"
	I0814 16:10:51.662560   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:51.731034   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:51.856122   21995 addons.go:475] Verifying addon gcp-auth=true in "addons-146898"
	I0814 16:10:51.858207   21995 out.go:177] * Verifying gcp-auth addon...
	I0814 16:10:51.860477   21995 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0814 16:10:51.926709   21995 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0814 16:10:51.926735   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:10:51.927545   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:52.158441   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:52.231082   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:52.364071   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:10:52.368701   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:52.658246   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:52.731518   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:52.864396   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:10:52.868585   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:53.158575   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:53.231421   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:53.364211   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:10:53.368185   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:53.554690   21995 node_ready.go:53] node "addons-146898" has status "Ready":"False"
	I0814 16:10:53.659757   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:53.731622   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:53.864375   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:10:53.868520   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:54.158366   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:54.231512   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:54.364409   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:10:54.368544   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:54.657857   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:54.730917   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:54.863169   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:10:54.868274   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:55.158313   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:55.258447   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:55.363996   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:10:55.368317   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:55.554957   21995 node_ready.go:53] node "addons-146898" has status "Ready":"False"
	I0814 16:10:55.657780   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:55.730770   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:55.863977   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:10:55.868076   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:56.157510   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:56.231344   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:56.363761   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:10:56.367801   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:56.657157   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:56.731046   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:56.863605   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:10:56.867659   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:57.157842   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:57.230745   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:57.363160   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:10:57.368018   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:57.657440   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:57.731660   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:57.864486   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:10:57.868579   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:58.053705   21995 node_ready.go:53] node "addons-146898" has status "Ready":"False"
	I0814 16:10:58.157394   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:58.231713   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:58.363264   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:10:58.368434   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:58.657854   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:58.731135   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:58.863772   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:10:58.867956   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:59.227884   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:59.232700   21995 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0814 16:10:59.232726   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:59.365427   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:10:59.368823   21995 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0814 16:10:59.368848   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:10:59.554873   21995 node_ready.go:49] node "addons-146898" has status "Ready":"True"
	I0814 16:10:59.554903   21995 node_ready.go:38] duration metric: took 17.503694802s for node "addons-146898" to be "Ready" ...
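
node_ready.go applies the same polling at node scope: it reads the node object and checks its Ready condition until the condition turns True, which took about 17.5s here (a node flips Ready once its kubelet and CNI are healthy). A minimal sketch of that check, assuming the usual client-go types:

    // Sketch: the node-scope check behind the node_ready.go lines (assumed shape).
    package readiness

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // nodeReady reports whether the named node's Ready condition is True.
    func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }
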
	I0814 16:10:59.554916   21995 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 16:10:59.563449   21995 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-rs8rx" in "kube-system" namespace to be "Ready" ...
	I0814 16:10:59.657795   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:59.732827   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:59.864807   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:10:59.869274   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:00.159862   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:00.259085   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:00.363298   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:00.369124   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:00.657915   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:00.731170   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:00.863191   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:00.869375   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:01.068436   21995 pod_ready.go:92] pod "coredns-6f6b679f8f-rs8rx" in "kube-system" namespace has status "Ready":"True"
	I0814 16:11:01.068459   21995 pod_ready.go:81] duration metric: took 1.504984661s for pod "coredns-6f6b679f8f-rs8rx" in "kube-system" namespace to be "Ready" ...
	I0814 16:11:01.068479   21995 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-146898" in "kube-system" namespace to be "Ready" ...
	I0814 16:11:01.072459   21995 pod_ready.go:92] pod "etcd-addons-146898" in "kube-system" namespace has status "Ready":"True"
	I0814 16:11:01.072479   21995 pod_ready.go:81] duration metric: took 3.994113ms for pod "etcd-addons-146898" in "kube-system" namespace to be "Ready" ...
	I0814 16:11:01.072492   21995 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-146898" in "kube-system" namespace to be "Ready" ...
	I0814 16:11:01.076356   21995 pod_ready.go:92] pod "kube-apiserver-addons-146898" in "kube-system" namespace has status "Ready":"True"
	I0814 16:11:01.076377   21995 pod_ready.go:81] duration metric: took 3.877145ms for pod "kube-apiserver-addons-146898" in "kube-system" namespace to be "Ready" ...
	I0814 16:11:01.076386   21995 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-146898" in "kube-system" namespace to be "Ready" ...
	I0814 16:11:01.079842   21995 pod_ready.go:92] pod "kube-controller-manager-addons-146898" in "kube-system" namespace has status "Ready":"True"
	I0814 16:11:01.079859   21995 pod_ready.go:81] duration metric: took 3.466937ms for pod "kube-controller-manager-addons-146898" in "kube-system" namespace to be "Ready" ...
	I0814 16:11:01.079870   21995 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g8sfq" in "kube-system" namespace to be "Ready" ...
	I0814 16:11:01.155544   21995 pod_ready.go:92] pod "kube-proxy-g8sfq" in "kube-system" namespace has status "Ready":"True"
	I0814 16:11:01.155564   21995 pod_ready.go:81] duration metric: took 75.687206ms for pod "kube-proxy-g8sfq" in "kube-system" namespace to be "Ready" ...
	I0814 16:11:01.155574   21995 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-146898" in "kube-system" namespace to be "Ready" ...
	I0814 16:11:01.158363   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:01.231683   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:01.364029   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:01.369284   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:01.555850   21995 pod_ready.go:92] pod "kube-scheduler-addons-146898" in "kube-system" namespace has status "Ready":"True"
	I0814 16:11:01.555876   21995 pod_ready.go:81] duration metric: took 400.294997ms for pod "kube-scheduler-addons-146898" in "kube-system" namespace to be "Ready" ...
	I0814 16:11:01.555890   21995 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace to be "Ready" ...
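
Note that metrics-server logs "Ready":"False" for several minutes below even though its container is already running: pod_ready.go waits on the pod's Ready condition, which the kubelet sets only once the readiness probe passes, rather than on the Running phase that the kapi.go loops check. A minimal sketch of that condition check (illustrative, not minikube's helper):

    // Sketch: the Ready-condition check behind the pod_ready.go lines.
    package readiness

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }
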
	I0814 16:11:01.657968   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:01.731490   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:01.863458   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:01.869568   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:02.157489   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:02.231576   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:02.363732   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:02.370310   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:02.658871   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:02.731923   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:02.864007   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:02.869111   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:03.159431   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:03.231776   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:03.364662   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:03.370373   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:03.561277   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:03.658989   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:03.731982   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:03.864085   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:03.869488   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:04.158634   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:04.232273   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:04.364293   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:04.370043   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:04.657909   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:04.731968   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:04.864325   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:04.869350   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:05.157906   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:05.232265   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:05.363762   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:05.368899   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:05.562241   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:05.658832   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:05.760185   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:05.863516   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:05.869619   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:06.158553   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:06.232237   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:06.364933   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:06.369303   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:06.658211   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:06.731738   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:06.864180   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:06.869973   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:07.158636   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:07.232114   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:07.363804   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:07.370980   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:07.658256   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:07.759240   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:07.863908   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:07.868846   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:08.060823   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:08.157949   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:08.231037   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:08.363773   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:08.368750   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:08.658557   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:08.731803   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:08.864420   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:08.869347   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:09.157878   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:09.231975   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:09.363813   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:09.368870   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:09.658080   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:09.731664   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:09.864360   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:09.869216   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:10.061292   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:10.157795   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:10.231860   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:10.364769   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:10.370429   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:10.658505   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:10.731843   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:10.864135   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:10.868980   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:11.157877   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:11.232239   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:11.363484   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:11.369754   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:11.658782   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:11.731984   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:11.863505   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:11.869527   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:12.061712   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:12.158462   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:12.232103   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:12.363694   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:12.369301   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:12.657359   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:12.731938   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:12.864053   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:12.869069   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:13.158274   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:13.231385   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:13.363833   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:13.368652   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:13.658965   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:13.732675   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:13.928344   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:13.929349   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:14.126590   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:14.158685   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:14.231950   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:14.364126   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:14.369687   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:14.658882   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:14.732056   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:14.864375   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:14.869717   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:15.157974   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:15.231152   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:15.363779   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:15.369426   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:15.658604   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:15.731601   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:15.864275   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:15.869916   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:16.157679   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:16.231859   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:16.363782   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:16.370004   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:16.560949   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:16.659138   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:16.731985   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:16.863521   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:16.869166   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:17.158240   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:17.231820   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:17.364241   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:17.369194   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:17.657588   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:17.731631   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:17.864242   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:17.869352   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:18.157779   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:18.231865   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:18.363735   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:18.368596   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:18.562919   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:18.658248   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:18.731383   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:18.863796   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:18.868999   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:19.158946   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:19.232270   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:19.429857   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:19.430878   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:19.658630   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:19.732391   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:19.863576   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:19.927995   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:20.158571   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:20.231888   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:20.364707   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:20.369435   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:20.657916   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:20.731517   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:20.864068   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:20.869267   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:21.062039   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:21.159582   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:21.231847   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:21.363970   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:21.369682   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:21.658699   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:21.731852   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:21.864149   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:21.868867   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:22.158408   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:22.231814   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:22.364321   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:22.369417   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:22.658561   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:22.732735   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:22.864300   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:22.869975   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:23.158230   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:23.231803   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:23.365186   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:23.369746   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:23.562198   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:23.658741   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:23.732191   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:23.863747   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:23.869822   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:24.158114   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:24.231429   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:24.364625   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:24.467157   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:24.657941   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:24.731151   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:24.863505   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:24.869527   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:25.158138   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:25.231699   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:25.364226   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:25.369059   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:25.657919   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:25.732029   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:25.863834   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:25.868927   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:26.061262   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:26.158375   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:26.240365   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:26.363469   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:26.370020   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:26.659377   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:26.731833   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:26.864102   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:26.870509   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:27.158711   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:27.232659   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:27.364320   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:27.370886   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:27.658343   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:27.731632   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:27.866590   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:27.869865   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:28.062263   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:28.158194   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:28.231561   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:28.364217   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:28.369689   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:28.658417   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:28.731957   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:28.863687   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:28.870528   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:29.158173   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:29.231362   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:29.364379   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:29.369743   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:29.658576   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:29.731834   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:29.864805   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:29.868903   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:30.158135   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:30.231229   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:30.364190   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:30.369760   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:30.561708   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:30.658670   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:30.758654   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:30.864170   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:30.869490   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:31.157733   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:31.232152   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:31.363974   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:31.369198   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:31.659625   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:31.731963   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:31.864598   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:31.869838   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:32.162139   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:32.258720   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:32.364181   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:32.368861   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:32.658569   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:32.731893   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:32.863255   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:32.869166   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:33.061121   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:33.158482   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:33.231666   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:33.364217   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:33.369090   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:33.658449   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:33.731596   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:33.863904   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:33.868970   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:34.157895   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:34.231405   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:34.364256   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:34.368867   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:34.658571   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:34.731952   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:34.863424   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:34.869475   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:35.061693   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:35.158561   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:35.231855   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:35.364596   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:35.369871   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:35.658265   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:35.759417   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:35.863903   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:35.868834   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:36.157859   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:36.231709   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:36.364201   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:36.369091   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:36.659032   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:36.731925   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:36.864678   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:36.869592   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:37.061914   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:37.157757   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:37.231700   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:37.364050   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:37.369076   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:37.657835   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:37.732104   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:37.863650   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:37.869538   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:38.157996   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:38.231238   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:38.363772   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:38.368570   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:38.658151   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:38.732565   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:38.863230   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:38.870209   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:39.062900   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:39.158864   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:39.231696   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:39.363676   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:39.370152   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:39.658416   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:39.738684   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:39.864253   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:39.869435   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:40.158438   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:40.231577   21995 kapi.go:107] duration metric: took 53.503544276s to wait for kubernetes.io/minikube-addons=registry ...
	I0814 16:11:40.364253   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:40.369210   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:40.658705   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:40.863759   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:40.868607   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:41.157843   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:41.363928   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:41.369405   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:41.561931   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:41.658193   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:41.863307   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:41.869331   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:42.158645   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:42.363683   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:42.369828   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:42.657899   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:42.864069   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:42.869369   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:43.158091   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:43.364071   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:43.369278   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:43.657502   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:43.863576   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:43.869567   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:44.062187   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:44.158474   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:44.363971   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:44.369336   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:44.657905   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:44.863636   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:44.869904   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:45.158501   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:45.363278   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:45.369272   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:45.657815   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:45.864606   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:45.869732   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:46.062490   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:46.159094   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:46.364209   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:46.369955   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:46.658303   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:46.930395   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:46.932197   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:47.228711   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:47.430956   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:47.431402   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:47.729259   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:47.929814   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:47.930780   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:48.131798   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:48.158092   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:48.364154   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:48.369810   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:48.659018   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:48.864237   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:48.869569   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:49.158780   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:49.363430   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:49.369408   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:49.657933   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:49.864837   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:49.869386   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:50.158702   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:50.363745   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:50.368830   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:50.560605   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:50.658178   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:50.863828   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:50.870373   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:51.158341   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:51.364353   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:51.369935   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:51.658971   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:51.864088   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:51.869271   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:52.158502   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:52.364239   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:52.369934   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:52.561616   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:52.659192   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:52.864751   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:52.868995   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:53.158702   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:53.364438   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:53.370229   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:53.657885   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:53.930158   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:53.931942   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:54.230244   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:54.431387   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:54.432243   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:54.631850   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:54.727617   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:54.927654   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:54.931590   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:55.232368   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:55.435361   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:55.435995   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:55.829821   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:55.931945   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:55.933446   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:56.227455   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:56.364075   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:56.369724   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:56.657601   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:56.863617   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:56.870377   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:57.062005   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:57.158386   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:57.364180   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:57.369331   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:57.659507   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:57.863414   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:57.870035   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:58.159122   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:58.364208   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:58.368957   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:58.659764   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:58.864292   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:58.869723   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:59.158474   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:59.364773   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:59.370214   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:59.561684   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:59.657774   21995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:59.864193   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:59.869463   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:00.158144   21995 kapi.go:107] duration metric: took 1m13.504113712s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0814 16:12:00.364267   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:00.369550   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:00.937679   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:00.938089   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:01.363753   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:01.368783   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:01.864087   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:01.869454   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:02.061883   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:12:02.363726   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:02.369958   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:02.864635   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:02.870336   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:03.363876   21995 kapi.go:107] duration metric: took 1m11.503396478s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0814 16:12:03.365838   21995 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-146898 cluster.
	I0814 16:12:03.367443   21995 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0814 16:12:03.368832   21995 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
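For reference, the `gcp-auth-skip-secret` key mentioned in the output above is a pod label. A minimal sketch of a pod that opts out of credential mounting, assuming the same cluster context; the pod name and image are placeholders, and the label value is an assumption (the output only names the key):

	kubectl --context addons-146898 apply -f - <<EOF
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds              # hypothetical pod name
	  labels:
	    gcp-auth-skip-secret: "true"  # key from the addon output; value assumed
	spec:
	  containers:
	  - name: app
	    image: nginx                  # placeholder image
	EOF

Per the output above, pods that already existed can be refreshed instead by rerunning the enable step with --refresh, e.g. `minikube -p addons-146898 addons enable gcp-auth --refresh`.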
	I0814 16:12:03.369911   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:03.870445   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:04.130584   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:12:04.369821   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:04.871232   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:05.369866   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:05.870726   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:06.370058   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:06.562084   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:12:06.870391   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:07.369927   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:07.870724   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:08.369961   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:08.870310   21995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:09.061528   21995 pod_ready.go:102] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"False"
	I0814 16:12:09.369721   21995 kapi.go:107] duration metric: took 1m21.004381388s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0814 16:12:09.371436   21995 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, ingress-dns, helm-tiller, metrics-server, nvidia-device-plugin, yakd, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0814 16:12:09.372686   21995 addons.go:510] duration metric: took 1m28.827325195s for enable addons: enabled=[cloud-spanner storage-provisioner ingress-dns helm-tiller metrics-server nvidia-device-plugin yakd storage-provisioner-rancher inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
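The enabled-addon set summarized above can be cross-checked from the host; a sketch, assuming the same profile name:

	minikube -p addons-146898 addons list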
	I0814 16:12:10.062585   21995 pod_ready.go:92] pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace has status "Ready":"True"
	I0814 16:12:10.062607   21995 pod_ready.go:81] duration metric: took 1m8.50670987s for pod "metrics-server-8988944d9-79d8t" in "kube-system" namespace to be "Ready" ...
	I0814 16:12:10.062619   21995 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-c58zx" in "kube-system" namespace to be "Ready" ...
	I0814 16:12:10.067106   21995 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-c58zx" in "kube-system" namespace has status "Ready":"True"
	I0814 16:12:10.067132   21995 pod_ready.go:81] duration metric: took 4.506211ms for pod "nvidia-device-plugin-daemonset-c58zx" in "kube-system" namespace to be "Ready" ...
	I0814 16:12:10.067162   21995 pod_ready.go:38] duration metric: took 1m10.512207597s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 16:12:10.067188   21995 api_server.go:52] waiting for apiserver process to appear ...
	I0814 16:12:10.067220   21995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 16:12:10.067279   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 16:12:10.102031   21995 cri.go:89] found id: "191364a2b9cfc88c966b6ad5aae597b680ce2f48d4990e33b3e44701ba05335c"
	I0814 16:12:10.102052   21995 cri.go:89] found id: ""
	I0814 16:12:10.102062   21995 logs.go:276] 1 containers: [191364a2b9cfc88c966b6ad5aae597b680ce2f48d4990e33b3e44701ba05335c]
	I0814 16:12:10.102114   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:10.105678   21995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 16:12:10.105743   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 16:12:10.139594   21995 cri.go:89] found id: "dfeacce667a3589f96e94dddc8e6ed1c91f527968f7ada18db013e2b34cedc6f"
	I0814 16:12:10.139621   21995 cri.go:89] found id: ""
	I0814 16:12:10.139631   21995 logs.go:276] 1 containers: [dfeacce667a3589f96e94dddc8e6ed1c91f527968f7ada18db013e2b34cedc6f]
	I0814 16:12:10.139674   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:10.143005   21995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 16:12:10.143066   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 16:12:10.176153   21995 cri.go:89] found id: "246a06bec9775045e7ecaeec5a5f7cebfed861b75117df0bc752236184ae507b"
	I0814 16:12:10.176177   21995 cri.go:89] found id: ""
	I0814 16:12:10.176186   21995 logs.go:276] 1 containers: [246a06bec9775045e7ecaeec5a5f7cebfed861b75117df0bc752236184ae507b]
	I0814 16:12:10.176227   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:10.179415   21995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 16:12:10.179488   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 16:12:10.214084   21995 cri.go:89] found id: "5ee1b1bb3dede1a0d44516bf49aecf693802168d62a7004f11eaae49131ce285"
	I0814 16:12:10.214139   21995 cri.go:89] found id: ""
	I0814 16:12:10.214147   21995 logs.go:276] 1 containers: [5ee1b1bb3dede1a0d44516bf49aecf693802168d62a7004f11eaae49131ce285]
	I0814 16:12:10.214197   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:10.217498   21995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 16:12:10.217556   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 16:12:10.250777   21995 cri.go:89] found id: "adf58724b31536bfc4bd2d95f3e1350f9577cbae6ea7da14f7787245ede4f945"
	I0814 16:12:10.250801   21995 cri.go:89] found id: ""
	I0814 16:12:10.250811   21995 logs.go:276] 1 containers: [adf58724b31536bfc4bd2d95f3e1350f9577cbae6ea7da14f7787245ede4f945]
	I0814 16:12:10.250860   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:10.254103   21995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 16:12:10.254150   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 16:12:10.287276   21995 cri.go:89] found id: "3527b98c06c0457a34494dce5c812267f4920f0a2e3670a153f3082cd3ff9190"
	I0814 16:12:10.287294   21995 cri.go:89] found id: ""
	I0814 16:12:10.287301   21995 logs.go:276] 1 containers: [3527b98c06c0457a34494dce5c812267f4920f0a2e3670a153f3082cd3ff9190]
	I0814 16:12:10.287344   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:10.290548   21995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 16:12:10.290602   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 16:12:10.323421   21995 cri.go:89] found id: "8ad4d9ab5f75c0dd637f90ee63c4d9b1096ee43d9c1c041a3f16857b80c40bb7"
	I0814 16:12:10.323439   21995 cri.go:89] found id: ""
	I0814 16:12:10.323446   21995 logs.go:276] 1 containers: [8ad4d9ab5f75c0dd637f90ee63c4d9b1096ee43d9c1c041a3f16857b80c40bb7]
	I0814 16:12:10.323494   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:10.326712   21995 logs.go:123] Gathering logs for kubelet ...
	I0814 16:12:10.326737   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 16:12:10.399388   21995 logs.go:123] Gathering logs for kube-scheduler [5ee1b1bb3dede1a0d44516bf49aecf693802168d62a7004f11eaae49131ce285] ...
	I0814 16:12:10.399424   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ee1b1bb3dede1a0d44516bf49aecf693802168d62a7004f11eaae49131ce285"
	I0814 16:12:10.439413   21995 logs.go:123] Gathering logs for kube-proxy [adf58724b31536bfc4bd2d95f3e1350f9577cbae6ea7da14f7787245ede4f945] ...
	I0814 16:12:10.439450   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adf58724b31536bfc4bd2d95f3e1350f9577cbae6ea7da14f7787245ede4f945"
	I0814 16:12:10.471795   21995 logs.go:123] Gathering logs for kindnet [8ad4d9ab5f75c0dd637f90ee63c4d9b1096ee43d9c1c041a3f16857b80c40bb7] ...
	I0814 16:12:10.471823   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ad4d9ab5f75c0dd637f90ee63c4d9b1096ee43d9c1c041a3f16857b80c40bb7"
	I0814 16:12:10.509712   21995 logs.go:123] Gathering logs for CRI-O ...
	I0814 16:12:10.509742   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 16:12:10.588275   21995 logs.go:123] Gathering logs for container status ...
	I0814 16:12:10.588310   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 16:12:10.629453   21995 logs.go:123] Gathering logs for dmesg ...
	I0814 16:12:10.629482   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 16:12:10.641113   21995 logs.go:123] Gathering logs for describe nodes ...
	I0814 16:12:10.641139   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 16:12:10.737594   21995 logs.go:123] Gathering logs for kube-apiserver [191364a2b9cfc88c966b6ad5aae597b680ce2f48d4990e33b3e44701ba05335c] ...
	I0814 16:12:10.737623   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 191364a2b9cfc88c966b6ad5aae597b680ce2f48d4990e33b3e44701ba05335c"
	I0814 16:12:10.782964   21995 logs.go:123] Gathering logs for etcd [dfeacce667a3589f96e94dddc8e6ed1c91f527968f7ada18db013e2b34cedc6f] ...
	I0814 16:12:10.782996   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfeacce667a3589f96e94dddc8e6ed1c91f527968f7ada18db013e2b34cedc6f"
	I0814 16:12:10.826176   21995 logs.go:123] Gathering logs for coredns [246a06bec9775045e7ecaeec5a5f7cebfed861b75117df0bc752236184ae507b] ...
	I0814 16:12:10.826212   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 246a06bec9775045e7ecaeec5a5f7cebfed861b75117df0bc752236184ae507b"
	I0814 16:12:10.885809   21995 logs.go:123] Gathering logs for kube-controller-manager [3527b98c06c0457a34494dce5c812267f4920f0a2e3670a153f3082cd3ff9190] ...
	I0814 16:12:10.885844   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3527b98c06c0457a34494dce5c812267f4920f0a2e3670a153f3082cd3ff9190"
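The log-gathering steps above map to commands one could run by hand inside the node; a sketch, assuming the same profile name (the container ID is a placeholder, taken from the `crictl ps -a` listing):

	# open a shell in the minikube node for this profile
	minikube -p addons-146898 ssh
	# inside the node: recent kubelet and CRI-O unit logs
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	# list containers, then tail the logs of one by ID
	sudo crictl ps -a
	sudo crictl logs --tail 400 <container-id>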
	I0814 16:12:13.440692   21995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 16:12:13.454238   21995 api_server.go:72] duration metric: took 1m32.908901224s to wait for apiserver process to appear ...
	I0814 16:12:13.454260   21995 api_server.go:88] waiting for apiserver healthz status ...
	I0814 16:12:13.454292   21995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 16:12:13.454330   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 16:12:13.486558   21995 cri.go:89] found id: "191364a2b9cfc88c966b6ad5aae597b680ce2f48d4990e33b3e44701ba05335c"
	I0814 16:12:13.486581   21995 cri.go:89] found id: ""
	I0814 16:12:13.486591   21995 logs.go:276] 1 containers: [191364a2b9cfc88c966b6ad5aae597b680ce2f48d4990e33b3e44701ba05335c]
	I0814 16:12:13.486642   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:13.489914   21995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 16:12:13.489971   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 16:12:13.521892   21995 cri.go:89] found id: "dfeacce667a3589f96e94dddc8e6ed1c91f527968f7ada18db013e2b34cedc6f"
	I0814 16:12:13.521911   21995 cri.go:89] found id: ""
	I0814 16:12:13.521919   21995 logs.go:276] 1 containers: [dfeacce667a3589f96e94dddc8e6ed1c91f527968f7ada18db013e2b34cedc6f]
	I0814 16:12:13.521960   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:13.525249   21995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 16:12:13.525299   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 16:12:13.557095   21995 cri.go:89] found id: "246a06bec9775045e7ecaeec5a5f7cebfed861b75117df0bc752236184ae507b"
	I0814 16:12:13.557116   21995 cri.go:89] found id: ""
	I0814 16:12:13.557123   21995 logs.go:276] 1 containers: [246a06bec9775045e7ecaeec5a5f7cebfed861b75117df0bc752236184ae507b]
	I0814 16:12:13.557163   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:13.560338   21995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 16:12:13.560394   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 16:12:13.593724   21995 cri.go:89] found id: "5ee1b1bb3dede1a0d44516bf49aecf693802168d62a7004f11eaae49131ce285"
	I0814 16:12:13.593744   21995 cri.go:89] found id: ""
	I0814 16:12:13.593753   21995 logs.go:276] 1 containers: [5ee1b1bb3dede1a0d44516bf49aecf693802168d62a7004f11eaae49131ce285]
	I0814 16:12:13.593803   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:13.596997   21995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 16:12:13.597081   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 16:12:13.629533   21995 cri.go:89] found id: "adf58724b31536bfc4bd2d95f3e1350f9577cbae6ea7da14f7787245ede4f945"
	I0814 16:12:13.629557   21995 cri.go:89] found id: ""
	I0814 16:12:13.629566   21995 logs.go:276] 1 containers: [adf58724b31536bfc4bd2d95f3e1350f9577cbae6ea7da14f7787245ede4f945]
	I0814 16:12:13.629607   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:13.632773   21995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 16:12:13.632832   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 16:12:13.665627   21995 cri.go:89] found id: "3527b98c06c0457a34494dce5c812267f4920f0a2e3670a153f3082cd3ff9190"
	I0814 16:12:13.665648   21995 cri.go:89] found id: ""
	I0814 16:12:13.665655   21995 logs.go:276] 1 containers: [3527b98c06c0457a34494dce5c812267f4920f0a2e3670a153f3082cd3ff9190]
	I0814 16:12:13.665698   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:13.669023   21995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 16:12:13.669102   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 16:12:13.702014   21995 cri.go:89] found id: "8ad4d9ab5f75c0dd637f90ee63c4d9b1096ee43d9c1c041a3f16857b80c40bb7"
	I0814 16:12:13.702038   21995 cri.go:89] found id: ""
	I0814 16:12:13.702047   21995 logs.go:276] 1 containers: [8ad4d9ab5f75c0dd637f90ee63c4d9b1096ee43d9c1c041a3f16857b80c40bb7]
	I0814 16:12:13.702101   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:13.705328   21995 logs.go:123] Gathering logs for dmesg ...
	I0814 16:12:13.705349   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 16:12:13.717108   21995 logs.go:123] Gathering logs for kube-apiserver [191364a2b9cfc88c966b6ad5aae597b680ce2f48d4990e33b3e44701ba05335c] ...
	I0814 16:12:13.717140   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 191364a2b9cfc88c966b6ad5aae597b680ce2f48d4990e33b3e44701ba05335c"
	I0814 16:12:13.760034   21995 logs.go:123] Gathering logs for etcd [dfeacce667a3589f96e94dddc8e6ed1c91f527968f7ada18db013e2b34cedc6f] ...
	I0814 16:12:13.760064   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfeacce667a3589f96e94dddc8e6ed1c91f527968f7ada18db013e2b34cedc6f"
	I0814 16:12:13.802392   21995 logs.go:123] Gathering logs for coredns [246a06bec9775045e7ecaeec5a5f7cebfed861b75117df0bc752236184ae507b] ...
	I0814 16:12:13.802421   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 246a06bec9775045e7ecaeec5a5f7cebfed861b75117df0bc752236184ae507b"
	I0814 16:12:13.860725   21995 logs.go:123] Gathering logs for kube-scheduler [5ee1b1bb3dede1a0d44516bf49aecf693802168d62a7004f11eaae49131ce285] ...
	I0814 16:12:13.860762   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ee1b1bb3dede1a0d44516bf49aecf693802168d62a7004f11eaae49131ce285"
	I0814 16:12:13.901341   21995 logs.go:123] Gathering logs for kube-controller-manager [3527b98c06c0457a34494dce5c812267f4920f0a2e3670a153f3082cd3ff9190] ...
	I0814 16:12:13.901371   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3527b98c06c0457a34494dce5c812267f4920f0a2e3670a153f3082cd3ff9190"
	I0814 16:12:13.954318   21995 logs.go:123] Gathering logs for kindnet [8ad4d9ab5f75c0dd637f90ee63c4d9b1096ee43d9c1c041a3f16857b80c40bb7] ...
	I0814 16:12:13.954347   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ad4d9ab5f75c0dd637f90ee63c4d9b1096ee43d9c1c041a3f16857b80c40bb7"
	I0814 16:12:13.992414   21995 logs.go:123] Gathering logs for CRI-O ...
	I0814 16:12:13.992446   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 16:12:14.067680   21995 logs.go:123] Gathering logs for container status ...
	I0814 16:12:14.067712   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 16:12:14.108330   21995 logs.go:123] Gathering logs for kubelet ...
	I0814 16:12:14.108367   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 16:12:14.182895   21995 logs.go:123] Gathering logs for describe nodes ...
	I0814 16:12:14.182931   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 16:12:14.279220   21995 logs.go:123] Gathering logs for kube-proxy [adf58724b31536bfc4bd2d95f3e1350f9577cbae6ea7da14f7787245ede4f945] ...
	I0814 16:12:14.279250   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adf58724b31536bfc4bd2d95f3e1350f9577cbae6ea7da14f7787245ede4f945"
	I0814 16:12:16.812490   21995 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0814 16:12:16.816079   21995 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0814 16:12:16.816866   21995 api_server.go:141] control plane version: v1.31.0
	I0814 16:12:16.816885   21995 api_server.go:131] duration metric: took 3.362619343s to wait for apiserver health ...
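The healthz probe above can be reproduced from the host using the endpoint shown in the log; -k skips certificate verification, since the apiserver serves a cluster-internal certificate:

	curl -k https://192.168.49.2:8443/healthz
	# a healthy apiserver responds with:
	# ok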
	I0814 16:12:16.816892   21995 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 16:12:16.816917   21995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 16:12:16.816964   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 16:12:16.849723   21995 cri.go:89] found id: "191364a2b9cfc88c966b6ad5aae597b680ce2f48d4990e33b3e44701ba05335c"
	I0814 16:12:16.849745   21995 cri.go:89] found id: ""
	I0814 16:12:16.849755   21995 logs.go:276] 1 containers: [191364a2b9cfc88c966b6ad5aae597b680ce2f48d4990e33b3e44701ba05335c]
	I0814 16:12:16.849812   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:16.853014   21995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 16:12:16.853106   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 16:12:16.885299   21995 cri.go:89] found id: "dfeacce667a3589f96e94dddc8e6ed1c91f527968f7ada18db013e2b34cedc6f"
	I0814 16:12:16.885325   21995 cri.go:89] found id: ""
	I0814 16:12:16.885335   21995 logs.go:276] 1 containers: [dfeacce667a3589f96e94dddc8e6ed1c91f527968f7ada18db013e2b34cedc6f]
	I0814 16:12:16.885397   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:16.888561   21995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 16:12:16.888635   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 16:12:16.921191   21995 cri.go:89] found id: "246a06bec9775045e7ecaeec5a5f7cebfed861b75117df0bc752236184ae507b"
	I0814 16:12:16.921209   21995 cri.go:89] found id: ""
	I0814 16:12:16.921216   21995 logs.go:276] 1 containers: [246a06bec9775045e7ecaeec5a5f7cebfed861b75117df0bc752236184ae507b]
	I0814 16:12:16.921253   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:16.924673   21995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 16:12:16.924747   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 16:12:16.958966   21995 cri.go:89] found id: "5ee1b1bb3dede1a0d44516bf49aecf693802168d62a7004f11eaae49131ce285"
	I0814 16:12:16.958983   21995 cri.go:89] found id: ""
	I0814 16:12:16.958990   21995 logs.go:276] 1 containers: [5ee1b1bb3dede1a0d44516bf49aecf693802168d62a7004f11eaae49131ce285]
	I0814 16:12:16.959036   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:16.962336   21995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 16:12:16.962441   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 16:12:16.995206   21995 cri.go:89] found id: "adf58724b31536bfc4bd2d95f3e1350f9577cbae6ea7da14f7787245ede4f945"
	I0814 16:12:16.995235   21995 cri.go:89] found id: ""
	I0814 16:12:16.995246   21995 logs.go:276] 1 containers: [adf58724b31536bfc4bd2d95f3e1350f9577cbae6ea7da14f7787245ede4f945]
	I0814 16:12:16.995293   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:16.998777   21995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 16:12:16.998836   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 16:12:17.032374   21995 cri.go:89] found id: "3527b98c06c0457a34494dce5c812267f4920f0a2e3670a153f3082cd3ff9190"
	I0814 16:12:17.032404   21995 cri.go:89] found id: ""
	I0814 16:12:17.032414   21995 logs.go:276] 1 containers: [3527b98c06c0457a34494dce5c812267f4920f0a2e3670a153f3082cd3ff9190]
	I0814 16:12:17.032469   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:17.035699   21995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 16:12:17.035749   21995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 16:12:17.067892   21995 cri.go:89] found id: "8ad4d9ab5f75c0dd637f90ee63c4d9b1096ee43d9c1c041a3f16857b80c40bb7"
	I0814 16:12:17.067917   21995 cri.go:89] found id: ""
	I0814 16:12:17.067925   21995 logs.go:276] 1 containers: [8ad4d9ab5f75c0dd637f90ee63c4d9b1096ee43d9c1c041a3f16857b80c40bb7]
	I0814 16:12:17.067967   21995 ssh_runner.go:195] Run: which crictl
	I0814 16:12:17.071178   21995 logs.go:123] Gathering logs for dmesg ...
	I0814 16:12:17.071207   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 16:12:17.082929   21995 logs.go:123] Gathering logs for kube-apiserver [191364a2b9cfc88c966b6ad5aae597b680ce2f48d4990e33b3e44701ba05335c] ...
	I0814 16:12:17.082975   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 191364a2b9cfc88c966b6ad5aae597b680ce2f48d4990e33b3e44701ba05335c"
	I0814 16:12:17.125272   21995 logs.go:123] Gathering logs for kubelet ...
	I0814 16:12:17.125304   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 16:12:17.206438   21995 logs.go:123] Gathering logs for etcd [dfeacce667a3589f96e94dddc8e6ed1c91f527968f7ada18db013e2b34cedc6f] ...
	I0814 16:12:17.206485   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfeacce667a3589f96e94dddc8e6ed1c91f527968f7ada18db013e2b34cedc6f"
	I0814 16:12:17.250267   21995 logs.go:123] Gathering logs for coredns [246a06bec9775045e7ecaeec5a5f7cebfed861b75117df0bc752236184ae507b] ...
	I0814 16:12:17.250302   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 246a06bec9775045e7ecaeec5a5f7cebfed861b75117df0bc752236184ae507b"
	I0814 16:12:17.309858   21995 logs.go:123] Gathering logs for kube-scheduler [5ee1b1bb3dede1a0d44516bf49aecf693802168d62a7004f11eaae49131ce285] ...
	I0814 16:12:17.309900   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ee1b1bb3dede1a0d44516bf49aecf693802168d62a7004f11eaae49131ce285"
	I0814 16:12:17.348314   21995 logs.go:123] Gathering logs for kube-proxy [adf58724b31536bfc4bd2d95f3e1350f9577cbae6ea7da14f7787245ede4f945] ...
	I0814 16:12:17.348346   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adf58724b31536bfc4bd2d95f3e1350f9577cbae6ea7da14f7787245ede4f945"
	I0814 16:12:17.381056   21995 logs.go:123] Gathering logs for kube-controller-manager [3527b98c06c0457a34494dce5c812267f4920f0a2e3670a153f3082cd3ff9190] ...
	I0814 16:12:17.381088   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3527b98c06c0457a34494dce5c812267f4920f0a2e3670a153f3082cd3ff9190"
	I0814 16:12:17.434174   21995 logs.go:123] Gathering logs for kindnet [8ad4d9ab5f75c0dd637f90ee63c4d9b1096ee43d9c1c041a3f16857b80c40bb7] ...
	I0814 16:12:17.434212   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ad4d9ab5f75c0dd637f90ee63c4d9b1096ee43d9c1c041a3f16857b80c40bb7"
	I0814 16:12:17.473752   21995 logs.go:123] Gathering logs for CRI-O ...
	I0814 16:12:17.473783   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 16:12:17.549302   21995 logs.go:123] Gathering logs for describe nodes ...
	I0814 16:12:17.549338   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 16:12:17.647991   21995 logs.go:123] Gathering logs for container status ...
	I0814 16:12:17.648018   21995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 16:12:20.198191   21995 system_pods.go:59] 19 kube-system pods found
	I0814 16:12:20.198231   21995 system_pods.go:61] "coredns-6f6b679f8f-rs8rx" [e1ff80e6-35f1-43e8-a10b-de57a706a45d] Running
	I0814 16:12:20.198236   21995 system_pods.go:61] "csi-hostpath-attacher-0" [eb22957e-40d4-46c5-ab19-a5f80dc49fe2] Running
	I0814 16:12:20.198239   21995 system_pods.go:61] "csi-hostpath-resizer-0" [b0b249da-1106-4481-a727-5d3dd4e9309e] Running
	I0814 16:12:20.198243   21995 system_pods.go:61] "csi-hostpathplugin-59ftp" [d8f46820-47d0-4d6a-882c-807b5a5b4203] Running
	I0814 16:12:20.198246   21995 system_pods.go:61] "etcd-addons-146898" [7a0c1724-2052-4a2e-842c-be916d45c6e8] Running
	I0814 16:12:20.198249   21995 system_pods.go:61] "kindnet-8q79t" [3f144cfd-ff50-4c02-a99d-01486262a254] Running
	I0814 16:12:20.198254   21995 system_pods.go:61] "kube-apiserver-addons-146898" [5192ebef-081d-44af-8efb-fe9694c28323] Running
	I0814 16:12:20.198257   21995 system_pods.go:61] "kube-controller-manager-addons-146898" [3e7df712-ed9d-4b18-b1b5-f73fda29bc48] Running
	I0814 16:12:20.198261   21995 system_pods.go:61] "kube-ingress-dns-minikube" [c9f18577-09a8-4168-a9ce-4c3dacaff132] Running
	I0814 16:12:20.198264   21995 system_pods.go:61] "kube-proxy-g8sfq" [cabf99db-c672-46bb-bb8e-f912b2e34db9] Running
	I0814 16:12:20.198267   21995 system_pods.go:61] "kube-scheduler-addons-146898" [51dea6b6-bb73-401d-8a0f-beb9adbfc01f] Running
	I0814 16:12:20.198270   21995 system_pods.go:61] "metrics-server-8988944d9-79d8t" [a144a102-aafb-4752-9784-1bdb16857bcd] Running
	I0814 16:12:20.198273   21995 system_pods.go:61] "nvidia-device-plugin-daemonset-c58zx" [203e32d0-800d-4b0e-acc3-caf43f35078e] Running
	I0814 16:12:20.198277   21995 system_pods.go:61] "registry-6fb4cdfc84-gwcbq" [6f24e44c-5e4f-4ef3-b21c-9950979c1e64] Running
	I0814 16:12:20.198282   21995 system_pods.go:61] "registry-proxy-dbmdb" [e307ed1d-1881-4d95-8ec9-361298af6c49] Running
	I0814 16:12:20.198288   21995 system_pods.go:61] "snapshot-controller-56fcc65765-47lvb" [432b350c-a8c3-4ac2-9061-b9c66e439297] Running
	I0814 16:12:20.198291   21995 system_pods.go:61] "snapshot-controller-56fcc65765-vfr28" [263c2d7c-3af6-41e8-97c4-7b3bcb707158] Running
	I0814 16:12:20.198298   21995 system_pods.go:61] "storage-provisioner" [07f9bb9e-3e12-4e4d-843a-a0e06de9d402] Running
	I0814 16:12:20.198301   21995 system_pods.go:61] "tiller-deploy-b48cc5f79-57b8n" [ab2aaa5f-4152-4d49-8a92-7653708c9955] Running
	I0814 16:12:20.198309   21995 system_pods.go:74] duration metric: took 3.381410419s to wait for pod list to return data ...
	I0814 16:12:20.198323   21995 default_sa.go:34] waiting for default service account to be created ...
	I0814 16:12:20.200954   21995 default_sa.go:45] found service account: "default"
	I0814 16:12:20.200978   21995 default_sa.go:55] duration metric: took 2.648661ms for default service account to be created ...
	I0814 16:12:20.200987   21995 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 16:12:20.208648   21995 system_pods.go:86] 19 kube-system pods found
	I0814 16:12:20.208677   21995 system_pods.go:89] "coredns-6f6b679f8f-rs8rx" [e1ff80e6-35f1-43e8-a10b-de57a706a45d] Running
	I0814 16:12:20.208683   21995 system_pods.go:89] "csi-hostpath-attacher-0" [eb22957e-40d4-46c5-ab19-a5f80dc49fe2] Running
	I0814 16:12:20.208687   21995 system_pods.go:89] "csi-hostpath-resizer-0" [b0b249da-1106-4481-a727-5d3dd4e9309e] Running
	I0814 16:12:20.208692   21995 system_pods.go:89] "csi-hostpathplugin-59ftp" [d8f46820-47d0-4d6a-882c-807b5a5b4203] Running
	I0814 16:12:20.208696   21995 system_pods.go:89] "etcd-addons-146898" [7a0c1724-2052-4a2e-842c-be916d45c6e8] Running
	I0814 16:12:20.208699   21995 system_pods.go:89] "kindnet-8q79t" [3f144cfd-ff50-4c02-a99d-01486262a254] Running
	I0814 16:12:20.208703   21995 system_pods.go:89] "kube-apiserver-addons-146898" [5192ebef-081d-44af-8efb-fe9694c28323] Running
	I0814 16:12:20.208708   21995 system_pods.go:89] "kube-controller-manager-addons-146898" [3e7df712-ed9d-4b18-b1b5-f73fda29bc48] Running
	I0814 16:12:20.208712   21995 system_pods.go:89] "kube-ingress-dns-minikube" [c9f18577-09a8-4168-a9ce-4c3dacaff132] Running
	I0814 16:12:20.208716   21995 system_pods.go:89] "kube-proxy-g8sfq" [cabf99db-c672-46bb-bb8e-f912b2e34db9] Running
	I0814 16:12:20.208720   21995 system_pods.go:89] "kube-scheduler-addons-146898" [51dea6b6-bb73-401d-8a0f-beb9adbfc01f] Running
	I0814 16:12:20.208726   21995 system_pods.go:89] "metrics-server-8988944d9-79d8t" [a144a102-aafb-4752-9784-1bdb16857bcd] Running
	I0814 16:12:20.208734   21995 system_pods.go:89] "nvidia-device-plugin-daemonset-c58zx" [203e32d0-800d-4b0e-acc3-caf43f35078e] Running
	I0814 16:12:20.208740   21995 system_pods.go:89] "registry-6fb4cdfc84-gwcbq" [6f24e44c-5e4f-4ef3-b21c-9950979c1e64] Running
	I0814 16:12:20.208747   21995 system_pods.go:89] "registry-proxy-dbmdb" [e307ed1d-1881-4d95-8ec9-361298af6c49] Running
	I0814 16:12:20.208751   21995 system_pods.go:89] "snapshot-controller-56fcc65765-47lvb" [432b350c-a8c3-4ac2-9061-b9c66e439297] Running
	I0814 16:12:20.208754   21995 system_pods.go:89] "snapshot-controller-56fcc65765-vfr28" [263c2d7c-3af6-41e8-97c4-7b3bcb707158] Running
	I0814 16:12:20.208758   21995 system_pods.go:89] "storage-provisioner" [07f9bb9e-3e12-4e4d-843a-a0e06de9d402] Running
	I0814 16:12:20.208762   21995 system_pods.go:89] "tiller-deploy-b48cc5f79-57b8n" [ab2aaa5f-4152-4d49-8a92-7653708c9955] Running
	I0814 16:12:20.208771   21995 system_pods.go:126] duration metric: took 7.778254ms to wait for k8s-apps to be running ...
	I0814 16:12:20.208779   21995 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 16:12:20.208822   21995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:12:20.219952   21995 system_svc.go:56] duration metric: took 11.164526ms WaitForService to wait for kubelet
	I0814 16:12:20.219978   21995 kubeadm.go:582] duration metric: took 1m39.674643236s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 16:12:20.220003   21995 node_conditions.go:102] verifying NodePressure condition ...
	I0814 16:12:20.223057   21995 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0814 16:12:20.223081   21995 node_conditions.go:123] node cpu capacity is 8
	I0814 16:12:20.223094   21995 node_conditions.go:105] duration metric: took 3.08597ms to run NodePressure ...
	I0814 16:12:20.223105   21995 start.go:241] waiting for startup goroutines ...
	I0814 16:12:20.223111   21995 start.go:246] waiting for cluster config update ...
	I0814 16:12:20.223126   21995 start.go:255] writing updated cluster config ...
	I0814 16:12:20.227252   21995 ssh_runner.go:195] Run: rm -f paused
	I0814 16:12:20.275717   21995 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 16:12:20.361195   21995 out.go:177] * Done! kubectl is now configured to use "addons-146898" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 14 16:15:48 addons-146898 crio[1023]: time="2024-08-14 16:15:48.480151247Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7559cbf597-kkjnl from CNI network \"kindnet\" (type=ptp)"
	Aug 14 16:15:48 addons-146898 crio[1023]: time="2024-08-14 16:15:48.522501438Z" level=info msg="Stopped pod sandbox: c66d1bc8716819119758a85433fd0c6b72628666ff7c7cc6f8290e57dc034568" id=c174f24c-9674-4dfc-8473-11d93acdc7f3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 14 16:15:48 addons-146898 crio[1023]: time="2024-08-14 16:15:48.822927612Z" level=info msg="Removing container: df6fb3b0abe807a05271d665da6fddfa9871a5747ebdc4200a0d4d800ca966e4" id=065b729f-1721-4cbf-a4f3-8ffcc9aa8efb name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 14 16:15:48 addons-146898 crio[1023]: time="2024-08-14 16:15:48.835658431Z" level=info msg="Removed container df6fb3b0abe807a05271d665da6fddfa9871a5747ebdc4200a0d4d800ca966e4: ingress-nginx/ingress-nginx-controller-7559cbf597-kkjnl/controller" id=065b729f-1721-4cbf-a4f3-8ffcc9aa8efb name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 14 16:16:35 addons-146898 crio[1023]: time="2024-08-14 16:16:35.688104784Z" level=info msg="Removing container: 804517e45859747c7a596f55c55d292d3a454a80e68af0d4b9945cdcf9132d46" id=d9db7755-6557-4607-b1f9-1d58ae5176f5 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 14 16:16:35 addons-146898 crio[1023]: time="2024-08-14 16:16:35.700735605Z" level=info msg="Removed container 804517e45859747c7a596f55c55d292d3a454a80e68af0d4b9945cdcf9132d46: ingress-nginx/ingress-nginx-admission-create-md956/create" id=d9db7755-6557-4607-b1f9-1d58ae5176f5 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 14 16:16:35 addons-146898 crio[1023]: time="2024-08-14 16:16:35.701954328Z" level=info msg="Removing container: 49f0e02bb01d2b5723f81a770c7de4fd26c5079413b30fc040719c89efa050b6" id=d893bea7-1241-419f-969c-7edea55e052c name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 14 16:16:35 addons-146898 crio[1023]: time="2024-08-14 16:16:35.715633893Z" level=info msg="Removed container 49f0e02bb01d2b5723f81a770c7de4fd26c5079413b30fc040719c89efa050b6: ingress-nginx/ingress-nginx-admission-patch-9gws6/patch" id=d893bea7-1241-419f-969c-7edea55e052c name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 14 16:16:35 addons-146898 crio[1023]: time="2024-08-14 16:16:35.716826974Z" level=info msg="Stopping pod sandbox: aa61e86c79550cd582923953b0d120e27a1580fe9aada3bdc61b5dd0c19f75f0" id=1fcae554-d4a6-4905-a35f-4719eda16da5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 14 16:16:35 addons-146898 crio[1023]: time="2024-08-14 16:16:35.716859582Z" level=info msg="Stopped pod sandbox (already stopped): aa61e86c79550cd582923953b0d120e27a1580fe9aada3bdc61b5dd0c19f75f0" id=1fcae554-d4a6-4905-a35f-4719eda16da5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 14 16:16:35 addons-146898 crio[1023]: time="2024-08-14 16:16:35.717072326Z" level=info msg="Removing pod sandbox: aa61e86c79550cd582923953b0d120e27a1580fe9aada3bdc61b5dd0c19f75f0" id=6f73aa5e-465d-4bdf-9f8f-00da85d8c036 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 14 16:16:35 addons-146898 crio[1023]: time="2024-08-14 16:16:35.725169314Z" level=info msg="Removed pod sandbox: aa61e86c79550cd582923953b0d120e27a1580fe9aada3bdc61b5dd0c19f75f0" id=6f73aa5e-465d-4bdf-9f8f-00da85d8c036 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 14 16:16:35 addons-146898 crio[1023]: time="2024-08-14 16:16:35.725583332Z" level=info msg="Stopping pod sandbox: 6a98d27efb2e9a08039553ab6d491800fa138e0b1f766c8a9320252a48721a6c" id=716baab5-d66d-4495-8287-ac491fa6ed34 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 14 16:16:35 addons-146898 crio[1023]: time="2024-08-14 16:16:35.725623225Z" level=info msg="Stopped pod sandbox (already stopped): 6a98d27efb2e9a08039553ab6d491800fa138e0b1f766c8a9320252a48721a6c" id=716baab5-d66d-4495-8287-ac491fa6ed34 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 14 16:16:35 addons-146898 crio[1023]: time="2024-08-14 16:16:35.725844276Z" level=info msg="Removing pod sandbox: 6a98d27efb2e9a08039553ab6d491800fa138e0b1f766c8a9320252a48721a6c" id=b412f1df-9c50-47eb-917c-1cf81f09a97a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 14 16:16:35 addons-146898 crio[1023]: time="2024-08-14 16:16:35.732529955Z" level=info msg="Removed pod sandbox: 6a98d27efb2e9a08039553ab6d491800fa138e0b1f766c8a9320252a48721a6c" id=b412f1df-9c50-47eb-917c-1cf81f09a97a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 14 16:16:35 addons-146898 crio[1023]: time="2024-08-14 16:16:35.732897862Z" level=info msg="Stopping pod sandbox: c66d1bc8716819119758a85433fd0c6b72628666ff7c7cc6f8290e57dc034568" id=ea386b69-4a38-40b3-8e4e-2f077f26440e name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 14 16:16:35 addons-146898 crio[1023]: time="2024-08-14 16:16:35.732922037Z" level=info msg="Stopped pod sandbox (already stopped): c66d1bc8716819119758a85433fd0c6b72628666ff7c7cc6f8290e57dc034568" id=ea386b69-4a38-40b3-8e4e-2f077f26440e name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 14 16:16:35 addons-146898 crio[1023]: time="2024-08-14 16:16:35.733177343Z" level=info msg="Removing pod sandbox: c66d1bc8716819119758a85433fd0c6b72628666ff7c7cc6f8290e57dc034568" id=b99e7183-6882-46d7-a67f-25ead4a7d1f0 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 14 16:16:35 addons-146898 crio[1023]: time="2024-08-14 16:16:35.739605318Z" level=info msg="Removed pod sandbox: c66d1bc8716819119758a85433fd0c6b72628666ff7c7cc6f8290e57dc034568" id=b99e7183-6882-46d7-a67f-25ead4a7d1f0 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 14 16:16:35 addons-146898 crio[1023]: time="2024-08-14 16:16:35.739939106Z" level=info msg="Stopping pod sandbox: 05d454b6c801df6824b88ad8a5c5fe81dbffc85cc7ba4e793a7094b445fcfc00" id=3807a4f4-7187-427f-88bc-a63387a8ebda name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 14 16:16:35 addons-146898 crio[1023]: time="2024-08-14 16:16:35.739972248Z" level=info msg="Stopped pod sandbox (already stopped): 05d454b6c801df6824b88ad8a5c5fe81dbffc85cc7ba4e793a7094b445fcfc00" id=3807a4f4-7187-427f-88bc-a63387a8ebda name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 14 16:16:35 addons-146898 crio[1023]: time="2024-08-14 16:16:35.740260267Z" level=info msg="Removing pod sandbox: 05d454b6c801df6824b88ad8a5c5fe81dbffc85cc7ba4e793a7094b445fcfc00" id=4ce65027-3a3e-4b3d-aff2-526c95108f38 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 14 16:16:35 addons-146898 crio[1023]: time="2024-08-14 16:16:35.746551277Z" level=info msg="Removed pod sandbox: 05d454b6c801df6824b88ad8a5c5fe81dbffc85cc7ba4e793a7094b445fcfc00" id=4ce65027-3a3e-4b3d-aff2-526c95108f38 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 14 16:18:06 addons-146898 crio[1023]: time="2024-08-14 16:18:06.492925536Z" level=info msg="Stopping container: 76e1492c01d8db3d2a73a1a72cdecdb5e3aa2c0b38e46783866e8579e2936261 (timeout: 30s)" id=7463eea9-66cc-4d3f-8c99-3949e53fffa8 name=/runtime.v1.RuntimeService/StopContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ecbb3524a95e8       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   67fd78e94f0ad       hello-world-app-55bf9c44b4-5q8tm
	581fb67114c20       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         4 minutes ago       Running             nginx                     0                   edc5700a88fe4       nginx
	44643b6c6e90a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     5 minutes ago       Running             busybox                   0                   1d2ce8cea40dd       busybox
	76e1492c01d8d       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Running             metrics-server            0                   a87704ad3890d       metrics-server-8988944d9-79d8t
	246a06bec9775       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        7 minutes ago       Running             coredns                   0                   39be2e6da715d       coredns-6f6b679f8f-rs8rx
	b6661ad7ea490       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   3181cb7f40b62       storage-provisioner
	8ad4d9ab5f75c       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b                      7 minutes ago       Running             kindnet-cni               0                   6fa39e51d9eb8       kindnet-8q79t
	adf58724b3153       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                        7 minutes ago       Running             kube-proxy                0                   bcccd8f0036f4       kube-proxy-g8sfq
	191364a2b9cfc       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                        7 minutes ago       Running             kube-apiserver            0                   3242f826af48b       kube-apiserver-addons-146898
	dfeacce667a35       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        7 minutes ago       Running             etcd                      0                   736a843486681       etcd-addons-146898
	3527b98c06c04       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                        7 minutes ago       Running             kube-controller-manager   0                   92939ee714511       kube-controller-manager-addons-146898
	5ee1b1bb3dede       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                        7 minutes ago       Running             kube-scheduler            0                   a97758e9f9904       kube-scheduler-addons-146898
	
	
	==> coredns [246a06bec9775045e7ecaeec5a5f7cebfed861b75117df0bc752236184ae507b] <==
	[INFO] 10.244.0.2:48085 - 13206 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000084193s
	[INFO] 10.244.0.2:35630 - 64345 "AAAA IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.003697225s
	[INFO] 10.244.0.2:35630 - 43099 "A IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.00466186s
	[INFO] 10.244.0.2:45579 - 51002 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.003848609s
	[INFO] 10.244.0.2:45579 - 20030 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.003894453s
	[INFO] 10.244.0.2:37185 - 37776 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004016615s
	[INFO] 10.244.0.2:37185 - 23955 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004471266s
	[INFO] 10.244.0.2:42414 - 64049 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00005434s
	[INFO] 10.244.0.2:42414 - 52021 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000054625s
	[INFO] 10.244.0.21:43408 - 4470 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000195774s
	[INFO] 10.244.0.21:38240 - 22188 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000292614s
	[INFO] 10.244.0.21:38014 - 55372 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00015071s
	[INFO] 10.244.0.21:56407 - 15661 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000096472s
	[INFO] 10.244.0.21:35096 - 13074 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000067927s
	[INFO] 10.244.0.21:43337 - 46833 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000128737s
	[INFO] 10.244.0.21:33752 - 10618 "A IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.007014459s
	[INFO] 10.244.0.21:41997 - 29691 "AAAA IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.007564708s
	[INFO] 10.244.0.21:42835 - 32345 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.00501955s
	[INFO] 10.244.0.21:40780 - 34365 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006231354s
	[INFO] 10.244.0.21:48421 - 50846 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004729351s
	[INFO] 10.244.0.21:35403 - 36661 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004800195s
	[INFO] 10.244.0.21:44808 - 58688 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000766863s
	[INFO] 10.244.0.21:56746 - 5068 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000928719s
	[INFO] 10.244.0.25:51993 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00022113s
	[INFO] 10.244.0.25:45872 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000118603s
	
	
	==> describe nodes <==
	Name:               addons-146898
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-146898
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35
	                    minikube.k8s.io/name=addons-146898
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_14T16_10_36_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-146898
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 16:10:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-146898
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 16:18:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 16:16:12 +0000   Wed, 14 Aug 2024 16:10:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 16:16:12 +0000   Wed, 14 Aug 2024 16:10:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 16:16:12 +0000   Wed, 14 Aug 2024 16:10:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 16:16:12 +0000   Wed, 14 Aug 2024 16:10:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-146898
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 379be374f1c148e28523fa9e7f5e33ce
	  System UUID:                1a425e32-2dd3-4f11-8284-3396b217a9b8
	  Boot ID:                    01947443-31df-48f7-8446-7d38dbb2c026
	  Kernel Version:             5.15.0-1066-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m47s
	  default                     hello-world-app-55bf9c44b4-5q8tm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 coredns-6f6b679f8f-rs8rx                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m27s
	  kube-system                 etcd-addons-146898                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m33s
	  kube-system                 kindnet-8q79t                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m27s
	  kube-system                 kube-apiserver-addons-146898             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 kube-controller-manager-addons-146898    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 kube-proxy-g8sfq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m27s
	  kube-system                 kube-scheduler-addons-146898             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 metrics-server-8988944d9-79d8t           100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         7m22s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m22s                  kube-proxy       
	  Normal   Starting                 7m38s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m38s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  7m37s (x8 over 7m38s)  kubelet          Node addons-146898 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m37s (x8 over 7m38s)  kubelet          Node addons-146898 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m37s (x7 over 7m38s)  kubelet          Node addons-146898 status is now: NodeHasSufficientPID
	  Normal   Starting                 7m32s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m32s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  7m32s                  kubelet          Node addons-146898 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m32s                  kubelet          Node addons-146898 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m32s                  kubelet          Node addons-146898 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m28s                  node-controller  Node addons-146898 event: Registered Node addons-146898 in Controller
	  Normal   NodeReady                7m8s                   kubelet          Node addons-146898 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000627] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000618] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.592364] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.044662] systemd[1]: /lib/systemd/system/cloud-init-local.service:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.006688] systemd[1]: /lib/systemd/system/cloud-init.service:19: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.012233] systemd[1]: /lib/systemd/system/cloud-final.service:9: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.003175] systemd[1]: /lib/systemd/system/cloud-config.service:8: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.015066] systemd[1]: /lib/systemd/system/cloud-init.target:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +6.190736] kauditd_printk_skb: 46 callbacks suppressed
	[Aug14 16:13] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 16 5d 95 3b 4b fb 42 2c ab a1 ce 2f 08 00
	[  +1.011855] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 16 5d 95 3b 4b fb 42 2c ab a1 ce 2f 08 00
	[  +2.015831] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 16 5d 95 3b 4b fb 42 2c ab a1 ce 2f 08 00
	[  +4.063648] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 16 5d 95 3b 4b fb 42 2c ab a1 ce 2f 08 00
	[  +8.191338] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 16 5d 95 3b 4b fb 42 2c ab a1 ce 2f 08 00
	[Aug14 16:14] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 16 5d 95 3b 4b fb 42 2c ab a1 ce 2f 08 00
	[ +33.277375] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000036] ll header: 00000000: 16 5d 95 3b 4b fb 42 2c ab a1 ce 2f 08 00
	
	
	==> etcd [dfeacce667a3589f96e94dddc8e6ed1c91f527968f7ada18db013e2b34cedc6f] <==
	{"level":"warn","ts":"2024-08-14T16:10:43.638299Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.283437ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-08-14T16:10:43.638391Z","caller":"traceutil/trace.go:171","msg":"trace[385103287] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:422; }","duration":"105.384175ms","start":"2024-08-14T16:10:43.532996Z","end":"2024-08-14T16:10:43.638380Z","steps":["trace[385103287] 'agreement among raft nodes before linearized reading'  (duration: 105.253588ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T16:10:43.638611Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.515054ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T16:10:43.638674Z","caller":"traceutil/trace.go:171","msg":"trace[183833095] range","detail":"{range_begin:/registry/services/specs/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:422; }","duration":"105.581567ms","start":"2024-08-14T16:10:43.533084Z","end":"2024-08-14T16:10:43.638665Z","steps":["trace[183833095] 'agreement among raft nodes before linearized reading'  (duration: 105.499536ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T16:10:43.639426Z","caller":"traceutil/trace.go:171","msg":"trace[606312538] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"106.586291ms","start":"2024-08-14T16:10:43.532830Z","end":"2024-08-14T16:10:43.639416Z","steps":["trace[606312538] 'process raft request'  (duration: 102.954708ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T16:10:44.638667Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.830239ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128031207388838662 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/controllerrevisions/kube-system/registry-proxy-5787bf5f6d\" mod_revision:0 > success:<request_put:<key:\"/registry/controllerrevisions/kube-system/registry-proxy-5787bf5f6d\" value_size:2702 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-08-14T16:10:44.642717Z","caller":"traceutil/trace.go:171","msg":"trace[1593354808] transaction","detail":"{read_only:false; response_revision:486; number_of_response:1; }","duration":"107.466381ms","start":"2024-08-14T16:10:44.535237Z","end":"2024-08-14T16:10:44.642703Z","steps":["trace[1593354808] 'compare'  (duration: 100.712981ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T16:10:44.642906Z","caller":"traceutil/trace.go:171","msg":"trace[1493422766] linearizableReadLoop","detail":"{readStateIndex:497; appliedIndex:496; }","duration":"106.738369ms","start":"2024-08-14T16:10:44.536157Z","end":"2024-08-14T16:10:44.642896Z","steps":["trace[1493422766] 'read index received'  (duration: 926.843µs)","trace[1493422766] 'applied index is now lower than readState.Index'  (duration: 105.810593ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-14T16:10:44.643157Z","caller":"traceutil/trace.go:171","msg":"trace[1511698523] transaction","detail":"{read_only:false; response_revision:487; number_of_response:1; }","duration":"100.458441ms","start":"2024-08-14T16:10:44.542690Z","end":"2024-08-14T16:10:44.643148Z","steps":["trace[1511698523] 'process raft request'  (duration: 98.98844ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T16:10:44.643391Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.864182ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/tiller-deploy\" ","response":"range_response_count:1 size:4640"}
	{"level":"info","ts":"2024-08-14T16:10:44.643451Z","caller":"traceutil/trace.go:171","msg":"trace[2034288172] range","detail":"{range_begin:/registry/deployments/kube-system/tiller-deploy; range_end:; response_count:1; response_revision:488; }","duration":"107.932541ms","start":"2024-08-14T16:10:44.535509Z","end":"2024-08-14T16:10:44.643442Z","steps":["trace[2034288172] 'agreement among raft nodes before linearized reading'  (duration: 107.840083ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T16:10:44.741762Z","caller":"traceutil/trace.go:171","msg":"trace[495505027] transaction","detail":"{read_only:false; response_revision:490; number_of_response:1; }","duration":"108.103234ms","start":"2024-08-14T16:10:44.633637Z","end":"2024-08-14T16:10:44.741740Z","steps":["trace[495505027] 'process raft request'  (duration: 105.566058ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T16:10:44.742294Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.708523ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/local-path\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T16:10:44.743840Z","caller":"traceutil/trace.go:171","msg":"trace[1703353517] range","detail":"{range_begin:/registry/storageclasses/local-path; range_end:; response_count:0; response_revision:490; }","duration":"116.257376ms","start":"2024-08-14T16:10:44.627564Z","end":"2024-08-14T16:10:44.743821Z","steps":["trace[1703353517] 'agreement among raft nodes before linearized reading'  (duration: 114.651401ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T16:10:44.744115Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.67652ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2024-08-14T16:10:44.747716Z","caller":"traceutil/trace.go:171","msg":"trace[675380562] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:1; response_revision:490; }","duration":"112.294597ms","start":"2024-08-14T16:10:44.635411Z","end":"2024-08-14T16:10:44.747706Z","steps":["trace[675380562] 'agreement among raft nodes before linearized reading'  (duration: 108.645368ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T16:10:44.747665Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.287281ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-08-14T16:10:44.747824Z","caller":"traceutil/trace.go:171","msg":"trace[328019672] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:497; }","duration":"200.452122ms","start":"2024-08-14T16:10:44.547365Z","end":"2024-08-14T16:10:44.747817Z","steps":["trace[328019672] 'agreement among raft nodes before linearized reading'  (duration: 200.268025ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T16:10:44.827499Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.342884ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-146898\" ","response":"range_response_count:1 size:5648"}
	{"level":"info","ts":"2024-08-14T16:10:44.827579Z","caller":"traceutil/trace.go:171","msg":"trace[2027603397] range","detail":"{range_begin:/registry/minions/addons-146898; range_end:; response_count:1; response_revision:497; }","duration":"182.435642ms","start":"2024-08-14T16:10:44.645129Z","end":"2024-08-14T16:10:44.827565Z","steps":["trace[2027603397] 'agreement among raft nodes before linearized reading'  (duration: 182.129973ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T16:12:31.671712Z","caller":"traceutil/trace.go:171","msg":"trace[1003861272] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1331; }","duration":"107.457063ms","start":"2024-08-14T16:12:31.564234Z","end":"2024-08-14T16:12:31.671691Z","steps":["trace[1003861272] 'process raft request'  (duration: 56.420267ms)","trace[1003861272] 'compare'  (duration: 50.928112ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-14T16:12:31.870536Z","caller":"traceutil/trace.go:171","msg":"trace[514024397] transaction","detail":"{read_only:false; response_revision:1333; number_of_response:1; }","duration":"190.240255ms","start":"2024-08-14T16:12:31.680274Z","end":"2024-08-14T16:12:31.870514Z","steps":["trace[514024397] 'process raft request'  (duration: 189.595525ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T16:12:31.870497Z","caller":"traceutil/trace.go:171","msg":"trace[1431956324] linearizableReadLoop","detail":"{readStateIndex:1375; appliedIndex:1374; }","duration":"129.371359ms","start":"2024-08-14T16:12:31.741109Z","end":"2024-08-14T16:12:31.870480Z","steps":["trace[1431956324] 'read index received'  (duration: 128.757359ms)","trace[1431956324] 'applied index is now lower than readState.Index'  (duration: 613.288µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-14T16:12:31.870657Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.52788ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T16:12:31.870692Z","caller":"traceutil/trace.go:171","msg":"trace[1688816681] range","detail":"{range_begin:/registry/serviceaccounts; range_end:; response_count:0; response_revision:1333; }","duration":"129.583563ms","start":"2024-08-14T16:12:31.741101Z","end":"2024-08-14T16:12:31.870684Z","steps":["trace[1688816681] 'agreement among raft nodes before linearized reading'  (duration: 129.498148ms)"],"step_count":1}
	
	
	==> kernel <==
	 16:18:07 up  1:00,  0 users,  load average: 0.53, 0.37, 0.23
	Linux addons-146898 5.15.0-1066-gcp #74~20.04.1-Ubuntu SMP Fri Jul 26 09:28:41 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [8ad4d9ab5f75c0dd637f90ee63c4d9b1096ee43d9c1c041a3f16857b80c40bb7] <==
	E0814 16:16:49.380123       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0814 16:16:59.026815       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0814 16:16:59.026864       1 main.go:299] handling current node
	I0814 16:17:09.026453       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0814 16:17:09.026489       1 main.go:299] handling current node
	W0814 16:17:16.522875       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0814 16:17:16.522932       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0814 16:17:19.026201       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0814 16:17:19.026238       1 main.go:299] handling current node
	I0814 16:17:29.026559       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0814 16:17:29.026593       1 main.go:299] handling current node
	W0814 16:17:30.527540       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0814 16:17:30.527571       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0814 16:17:35.730415       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0814 16:17:35.730445       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0814 16:17:39.025798       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0814 16:17:39.025829       1 main.go:299] handling current node
	I0814 16:17:49.026246       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0814 16:17:49.026285       1 main.go:299] handling current node
	I0814 16:17:59.025918       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0814 16:17:59.025958       1 main.go:299] handling current node
	W0814 16:18:02.106555       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0814 16:18:02.106592       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0814 16:18:02.890077       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0814 16:18:02.890119       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	
	
	==> kube-apiserver [191364a2b9cfc88c966b6ad5aae597b680ce2f48d4990e33b3e44701ba05335c] <==
	I0814 16:12:09.787221       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0814 16:12:30.868587       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47740: use of closed network connection
	E0814 16:12:31.027539       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47762: use of closed network connection
	I0814 16:12:51.088344       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.42.237"}
	E0814 16:13:03.980926       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.28:43102: read: connection reset by peer
	E0814 16:13:07.177993       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0814 16:13:17.210544       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0814 16:13:18.229359       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0814 16:13:18.482104       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0814 16:13:22.763177       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0814 16:13:23.032966       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.242.205"}
	I0814 16:13:37.765451       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0814 16:13:37.765501       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0814 16:13:37.778273       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0814 16:13:37.778403       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0814 16:13:37.794591       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0814 16:13:37.794648       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0814 16:13:37.829342       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0814 16:13:37.829499       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0814 16:13:37.880567       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0814 16:13:37.880614       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0814 16:13:38.830203       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0814 16:13:38.880636       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0814 16:13:38.938900       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0814 16:15:43.839092       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.25.34"}
	
	
	==> kube-controller-manager [3527b98c06c0457a34494dce5c812267f4920f0a2e3670a153f3082cd3ff9190] <==
	E0814 16:16:00.026558       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0814 16:16:12.784905       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-146898"
	W0814 16:16:25.461699       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:16:25.461740       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0814 16:16:27.226382       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:16:27.226433       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0814 16:16:41.348236       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:16:41.348277       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0814 16:16:46.775149       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:16:46.775203       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0814 16:16:57.486946       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:16:57.486985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0814 16:17:07.390396       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:17:07.390431       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0814 16:17:11.947804       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:17:11.947854       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0814 16:17:45.893899       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:17:45.893944       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0814 16:17:49.957133       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:17:49.957173       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0814 16:17:53.525564       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:17:53.525608       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0814 16:18:04.057431       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:18:04.057474       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0814 16:18:06.482601       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-8988944d9" duration="9.537µs"
	
	
	==> kube-proxy [adf58724b31536bfc4bd2d95f3e1350f9577cbae6ea7da14f7787245ede4f945] <==
	I0814 16:10:43.227024       1 server_linux.go:66] "Using iptables proxy"
	I0814 16:10:44.042249       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0814 16:10:44.044518       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 16:10:44.547086       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0814 16:10:44.547866       1 server_linux.go:169] "Using iptables Proxier"
	I0814 16:10:44.735166       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 16:10:44.735940       1 server.go:483] "Version info" version="v1.31.0"
	I0814 16:10:44.736020       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 16:10:44.837599       1 config.go:197] "Starting service config controller"
	I0814 16:10:44.842877       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 16:10:44.840008       1 config.go:326] "Starting node config controller"
	I0814 16:10:44.843068       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 16:10:44.839635       1 config.go:104] "Starting endpoint slice config controller"
	I0814 16:10:44.843146       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 16:10:44.948728       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0814 16:10:44.948910       1 shared_informer.go:320] Caches are synced for service config
	I0814 16:10:44.949083       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5ee1b1bb3dede1a0d44516bf49aecf693802168d62a7004f11eaae49131ce285] <==
	W0814 16:10:32.856265       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0814 16:10:32.856269       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 16:10:32.856283       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0814 16:10:32.856285       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:32.856226       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 16:10:32.856307       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:33.755674       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0814 16:10:33.755720       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:33.783564       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0814 16:10:33.783603       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:33.801227       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 16:10:33.801262       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:33.853752       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 16:10:33.853795       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:33.900742       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 16:10:33.900777       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:33.908027       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 16:10:33.908061       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:33.926270       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 16:10:33.926313       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:33.946693       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 16:10:33.946742       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:33.950999       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 16:10:33.951035       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0814 16:10:34.354489       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 14 16:16:45 addons-146898 kubelet[1651]: E0814 16:16:45.605123    1651 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652205604832000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613263,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:16:55 addons-146898 kubelet[1651]: E0814 16:16:55.606959    1651 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652215606770584,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613263,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:16:55 addons-146898 kubelet[1651]: E0814 16:16:55.606992    1651 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652215606770584,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613263,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:17:05 addons-146898 kubelet[1651]: E0814 16:17:05.609045    1651 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652225608835285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613263,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:17:05 addons-146898 kubelet[1651]: E0814 16:17:05.609088    1651 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652225608835285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613263,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:17:15 addons-146898 kubelet[1651]: E0814 16:17:15.611160    1651 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652235610930421,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613263,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:17:15 addons-146898 kubelet[1651]: E0814 16:17:15.611201    1651 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652235610930421,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613263,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:17:25 addons-146898 kubelet[1651]: E0814 16:17:25.614156    1651 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652245613900364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613263,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:17:25 addons-146898 kubelet[1651]: E0814 16:17:25.614194    1651 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652245613900364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613263,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:17:35 addons-146898 kubelet[1651]: E0814 16:17:35.616412    1651 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652255616179638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613263,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:17:35 addons-146898 kubelet[1651]: E0814 16:17:35.616444    1651 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652255616179638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613263,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:17:45 addons-146898 kubelet[1651]: E0814 16:17:45.618918    1651 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652265618703207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613263,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:17:45 addons-146898 kubelet[1651]: E0814 16:17:45.618949    1651 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652265618703207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613263,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:17:55 addons-146898 kubelet[1651]: E0814 16:17:55.622059    1651 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652275621806746,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613263,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:17:55 addons-146898 kubelet[1651]: E0814 16:17:55.622097    1651 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652275621806746,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613263,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:18:05 addons-146898 kubelet[1651]: I0814 16:18:05.427238    1651 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 14 16:18:05 addons-146898 kubelet[1651]: E0814 16:18:05.624445    1651 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652285624225278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613263,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:18:05 addons-146898 kubelet[1651]: E0814 16:18:05.624482    1651 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652285624225278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613263,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:18:06 addons-146898 kubelet[1651]: I0814 16:18:06.492042    1651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-5q8tm" podStartSLOduration=140.252795861 podStartE2EDuration="2m23.492021818s" podCreationTimestamp="2024-08-14 16:15:43 +0000 UTC" firstStartedPulling="2024-08-14 16:15:43.978305758 +0000 UTC m=+308.651063929" lastFinishedPulling="2024-08-14 16:15:47.217531725 +0000 UTC m=+311.890289886" observedRunningTime="2024-08-14 16:15:47.826493554 +0000 UTC m=+312.499251734" watchObservedRunningTime="2024-08-14 16:18:06.492021818 +0000 UTC m=+451.164780048"
	Aug 14 16:18:07 addons-146898 kubelet[1651]: I0814 16:18:07.803619    1651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr8qd\" (UniqueName: \"kubernetes.io/projected/a144a102-aafb-4752-9784-1bdb16857bcd-kube-api-access-mr8qd\") pod \"a144a102-aafb-4752-9784-1bdb16857bcd\" (UID: \"a144a102-aafb-4752-9784-1bdb16857bcd\") "
	Aug 14 16:18:07 addons-146898 kubelet[1651]: I0814 16:18:07.803691    1651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a144a102-aafb-4752-9784-1bdb16857bcd-tmp-dir\") pod \"a144a102-aafb-4752-9784-1bdb16857bcd\" (UID: \"a144a102-aafb-4752-9784-1bdb16857bcd\") "
	Aug 14 16:18:07 addons-146898 kubelet[1651]: I0814 16:18:07.803963    1651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a144a102-aafb-4752-9784-1bdb16857bcd-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "a144a102-aafb-4752-9784-1bdb16857bcd" (UID: "a144a102-aafb-4752-9784-1bdb16857bcd"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 14 16:18:07 addons-146898 kubelet[1651]: I0814 16:18:07.805279    1651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a144a102-aafb-4752-9784-1bdb16857bcd-kube-api-access-mr8qd" (OuterVolumeSpecName: "kube-api-access-mr8qd") pod "a144a102-aafb-4752-9784-1bdb16857bcd" (UID: "a144a102-aafb-4752-9784-1bdb16857bcd"). InnerVolumeSpecName "kube-api-access-mr8qd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 14 16:18:07 addons-146898 kubelet[1651]: I0814 16:18:07.904188    1651 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a144a102-aafb-4752-9784-1bdb16857bcd-tmp-dir\") on node \"addons-146898\" DevicePath \"\""
	Aug 14 16:18:07 addons-146898 kubelet[1651]: I0814 16:18:07.904227    1651 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mr8qd\" (UniqueName: \"kubernetes.io/projected/a144a102-aafb-4752-9784-1bdb16857bcd-kube-api-access-mr8qd\") on node \"addons-146898\" DevicePath \"\""
	
	
	==> storage-provisioner [b6661ad7ea490fad210b2513ffa64647cb6ca22e8e580e1f1ed4a268f425110b] <==
	I0814 16:11:00.165855       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0814 16:11:00.174910       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0814 16:11:00.174960       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0814 16:11:00.182608       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0814 16:11:00.182782       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-146898_19ee1e90-1b64-4533-80fe-123fc1837ab2!
	I0814 16:11:00.183220       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0bcf1568-4c44-443f-a1cf-b3444b909576", APIVersion:"v1", ResourceVersion:"939", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-146898_19ee1e90-1b64-4533-80fe-123fc1837ab2 became leader
	I0814 16:11:00.283920       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-146898_19ee1e90-1b64-4533-80fe-123fc1837ab2!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-146898 -n addons-146898
helpers_test.go:261: (dbg) Run:  kubectl --context addons-146898 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (302.53s)
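Note on the failure above: the repeated "failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" messages in the log dump come from a metadata informer (most likely kube-controller-manager's) trying to list resources that API discovery still advertises for the aggregated metrics.k8s.io group while metrics-server is unhealthy, which is consistent with this MetricsServer test failing. A diagnostic sketch for this class of failure (illustrative commands, not part of the recorded run; the context name is taken from this report):

	kubectl --context addons-146898 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-146898 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context addons-146898 top nodes

An APIService stuck at Available=False (for example with reason FailedDiscoveryCheck) would match the "server could not find the requested resource" errors logged above.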

                                                
                                    

Test pass (301/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 19.27
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.19
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 16.44
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.99
18 TestDownloadOnly/v1.31.0/DeleteAll 0.39
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
20 TestDownloadOnlyKic 1.05
21 TestBinaryMirror 0.75
22 TestOffline 58.43
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 141.16
31 TestAddons/serial/GCPAuth/Namespaces 0.15
33 TestAddons/parallel/Registry 14.89
35 TestAddons/parallel/InspektorGadget 10.74
37 TestAddons/parallel/HelmTiller 11.77
39 TestAddons/parallel/CSI 47.05
40 TestAddons/parallel/Headlamp 21.56
41 TestAddons/parallel/CloudSpanner 5.47
42 TestAddons/parallel/LocalPath 54.95
43 TestAddons/parallel/NvidiaDevicePlugin 5.46
44 TestAddons/parallel/Yakd 11.79
45 TestAddons/StoppedEnableDisable 12.05
46 TestCertOptions 24.68
47 TestCertExpiration 221.49
49 TestForceSystemdFlag 24.03
50 TestForceSystemdEnv 36.42
52 TestKVMDriverInstallOrUpdate 4.44
56 TestErrorSpam/setup 20.21
57 TestErrorSpam/start 0.55
58 TestErrorSpam/status 0.84
59 TestErrorSpam/pause 1.54
60 TestErrorSpam/unpause 1.55
61 TestErrorSpam/stop 1.33
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 41.87
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 33.95
68 TestFunctional/serial/KubeContext 0.05
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.92
73 TestFunctional/serial/CacheCmd/cache/add_local 2.13
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.64
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 38.53
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.31
84 TestFunctional/serial/LogsFileCmd 1.33
85 TestFunctional/serial/InvalidService 3.96
87 TestFunctional/parallel/ConfigCmd 0.36
88 TestFunctional/parallel/DashboardCmd 10.07
89 TestFunctional/parallel/DryRun 0.32
90 TestFunctional/parallel/InternationalLanguage 0.14
91 TestFunctional/parallel/StatusCmd 0.86
95 TestFunctional/parallel/ServiceCmdConnect 9.82
96 TestFunctional/parallel/AddonsCmd 0.11
97 TestFunctional/parallel/PersistentVolumeClaim 43.82
99 TestFunctional/parallel/SSHCmd 0.6
100 TestFunctional/parallel/CpCmd 2.05
101 TestFunctional/parallel/MySQL 23.49
102 TestFunctional/parallel/FileSync 0.28
103 TestFunctional/parallel/CertSync 2.03
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.48
111 TestFunctional/parallel/License 0.62
112 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
113 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
114 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
115 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
116 TestFunctional/parallel/ImageCommands/ImageBuild 4.33
117 TestFunctional/parallel/ImageCommands/Setup 1.96
118 TestFunctional/parallel/Version/short 0.05
119 TestFunctional/parallel/Version/components 0.61
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.15
121 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.85
122 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.34
123 TestFunctional/parallel/ImageCommands/ImageSaveToFile 3.07
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.54
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 22.27
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.63
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.98
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.54
132 TestFunctional/parallel/ServiceCmd/DeployApp 7.14
133 TestFunctional/parallel/ServiceCmd/List 0.87
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.34
135 TestFunctional/parallel/ProfileCmd/profile_list 0.35
136 TestFunctional/parallel/ServiceCmd/JSONOutput 0.92
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
138 TestFunctional/parallel/MountCmd/any-port 8.52
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.55
140 TestFunctional/parallel/ServiceCmd/Format 0.53
141 TestFunctional/parallel/ServiceCmd/URL 0.52
142 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
143 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
144 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
146 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
150 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
151 TestFunctional/parallel/MountCmd/specific-port 1.84
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.87
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 101.02
160 TestMultiControlPlane/serial/DeployApp 5.28
161 TestMultiControlPlane/serial/PingHostFromPods 0.99
162 TestMultiControlPlane/serial/AddWorkerNode 36.47
163 TestMultiControlPlane/serial/NodeLabels 0.06
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.63
165 TestMultiControlPlane/serial/CopyFile 15.48
166 TestMultiControlPlane/serial/StopSecondaryNode 12.49
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.48
168 TestMultiControlPlane/serial/RestartSecondaryNode 48.52
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.64
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 188.55
171 TestMultiControlPlane/serial/DeleteSecondaryNode 11.32
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.46
173 TestMultiControlPlane/serial/StopCluster 35.51
174 TestMultiControlPlane/serial/RestartCluster 115.46
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.46
176 TestMultiControlPlane/serial/AddSecondaryNode 42.43
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.63
181 TestJSONOutput/start/Command 44.64
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.67
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.6
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.78
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.2
206 TestKicCustomNetwork/create_custom_network 34.31
207 TestKicCustomNetwork/use_default_bridge_network 26.61
208 TestKicExistingNetwork 25.6
209 TestKicCustomSubnet 23.15
210 TestKicStaticIP 22.61
211 TestMainNoArgs 0.04
212 TestMinikubeProfile 52.89
215 TestMountStart/serial/StartWithMountFirst 5.88
216 TestMountStart/serial/VerifyMountFirst 0.24
217 TestMountStart/serial/StartWithMountSecond 9.17
218 TestMountStart/serial/VerifyMountSecond 0.24
219 TestMountStart/serial/DeleteFirst 1.6
220 TestMountStart/serial/VerifyMountPostDelete 0.24
221 TestMountStart/serial/Stop 1.17
222 TestMountStart/serial/RestartStopped 7.88
223 TestMountStart/serial/VerifyMountPostStop 0.24
226 TestMultiNode/serial/FreshStart2Nodes 70.38
227 TestMultiNode/serial/DeployApp2Nodes 5.09
228 TestMultiNode/serial/PingHostFrom2Pods 0.69
229 TestMultiNode/serial/AddNode 29.85
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.29
232 TestMultiNode/serial/CopyFile 8.82
233 TestMultiNode/serial/StopNode 2.07
234 TestMultiNode/serial/StartAfterStop 9.01
235 TestMultiNode/serial/RestartKeepsNodes 77.75
236 TestMultiNode/serial/DeleteNode 4.94
237 TestMultiNode/serial/StopMultiNode 23.68
238 TestMultiNode/serial/RestartMultiNode 54.68
239 TestMultiNode/serial/ValidateNameConflict 25.58
244 TestPreload 116.94
246 TestScheduledStopUnix 96.16
249 TestInsufficientStorage 12.49
250 TestRunningBinaryUpgrade 61.73
252 TestKubernetesUpgrade 346.81
253 TestMissingContainerUpgrade 175.17
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
259 TestNoKubernetes/serial/StartWithK8s 32.13
264 TestNetworkPlugins/group/false 7.81
268 TestNoKubernetes/serial/StartWithStopK8s 26.42
269 TestStoppedBinaryUpgrade/Setup 2.21
270 TestNoKubernetes/serial/Start 9.03
271 TestStoppedBinaryUpgrade/Upgrade 86.25
272 TestNoKubernetes/serial/VerifyK8sNotRunning 0.25
273 TestNoKubernetes/serial/ProfileList 7.3
274 TestNoKubernetes/serial/Stop 1.19
275 TestNoKubernetes/serial/StartNoArgs 6.99
276 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.34
277 TestStoppedBinaryUpgrade/MinikubeLogs 0.91
286 TestPause/serial/Start 46.28
287 TestPause/serial/SecondStartNoReconfiguration 21.68
288 TestPause/serial/Pause 0.67
289 TestPause/serial/VerifyStatus 0.29
290 TestPause/serial/Unpause 0.62
291 TestPause/serial/PauseAgain 0.76
292 TestPause/serial/DeletePaused 2.73
293 TestPause/serial/VerifyDeletedResources 15.34
294 TestNetworkPlugins/group/auto/Start 43.18
295 TestNetworkPlugins/group/kindnet/Start 43.61
296 TestNetworkPlugins/group/auto/KubeletFlags 0.27
297 TestNetworkPlugins/group/auto/NetCatPod 10.21
298 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
299 TestNetworkPlugins/group/auto/DNS 0.12
300 TestNetworkPlugins/group/auto/Localhost 0.1
301 TestNetworkPlugins/group/auto/HairPin 0.1
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
303 TestNetworkPlugins/group/kindnet/NetCatPod 9.2
304 TestNetworkPlugins/group/kindnet/DNS 0.15
305 TestNetworkPlugins/group/kindnet/Localhost 0.13
306 TestNetworkPlugins/group/kindnet/HairPin 0.12
307 TestNetworkPlugins/group/calico/Start 56.06
308 TestNetworkPlugins/group/custom-flannel/Start 52.44
309 TestNetworkPlugins/group/calico/ControllerPod 6.01
310 TestNetworkPlugins/group/calico/KubeletFlags 0.27
311 TestNetworkPlugins/group/calico/NetCatPod 9.19
312 TestNetworkPlugins/group/enable-default-cni/Start 63.53
313 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
314 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.21
315 TestNetworkPlugins/group/calico/DNS 0.2
316 TestNetworkPlugins/group/calico/Localhost 0.18
317 TestNetworkPlugins/group/calico/HairPin 0.18
318 TestNetworkPlugins/group/flannel/Start 55.1
319 TestNetworkPlugins/group/custom-flannel/DNS 0.16
320 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
321 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
322 TestNetworkPlugins/group/bridge/Start 72.92
324 TestStartStop/group/old-k8s-version/serial/FirstStart 141.89
325 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
326 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.21
327 TestNetworkPlugins/group/flannel/ControllerPod 6.01
328 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
329 TestNetworkPlugins/group/flannel/NetCatPod 9.21
330 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
331 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
332 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
333 TestNetworkPlugins/group/flannel/DNS 0.13
334 TestNetworkPlugins/group/flannel/Localhost 0.11
335 TestNetworkPlugins/group/flannel/HairPin 0.12
337 TestStartStop/group/no-preload/serial/FirstStart 62.72
338 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
340 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 49.82
341 TestNetworkPlugins/group/bridge/NetCatPod 9.22
342 TestNetworkPlugins/group/bridge/DNS 0.12
343 TestNetworkPlugins/group/bridge/Localhost 0.1
344 TestNetworkPlugins/group/bridge/HairPin 0.1
346 TestStartStop/group/newest-cni/serial/FirstStart 29.3
347 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.25
348 TestStartStop/group/no-preload/serial/DeployApp 10.24
349 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.89
350 TestStartStop/group/newest-cni/serial/DeployApp 0
351 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.85
352 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.89
353 TestStartStop/group/newest-cni/serial/Stop 1.19
354 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.16
355 TestStartStop/group/newest-cni/serial/SecondStart 12.65
356 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.89
357 TestStartStop/group/no-preload/serial/Stop 11.96
358 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
359 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 278.83
360 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
361 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
362 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
363 TestStartStop/group/newest-cni/serial/Pause 2.76
364 TestStartStop/group/old-k8s-version/serial/DeployApp 10.41
365 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
366 TestStartStop/group/no-preload/serial/SecondStart 301
368 TestStartStop/group/embed-certs/serial/FirstStart 48.81
369 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.89
370 TestStartStop/group/old-k8s-version/serial/Stop 13.42
371 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
372 TestStartStop/group/old-k8s-version/serial/SecondStart 28.26
373 TestStartStop/group/embed-certs/serial/DeployApp 10.25
374 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 27
375 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.87
376 TestStartStop/group/embed-certs/serial/Stop 12.99
377 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
378 TestStartStop/group/embed-certs/serial/SecondStart 262.38
379 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
380 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
381 TestStartStop/group/old-k8s-version/serial/Pause 2.81
382 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
383 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
384 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
385 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.61
386 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
387 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
388 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
389 TestStartStop/group/no-preload/serial/Pause 2.59
390 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
391 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
392 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
393 TestStartStop/group/embed-certs/serial/Pause 2.62
TestDownloadOnly/v1.20.0/json-events (19.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-351385 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-351385 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (19.267649761s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (19.27s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-351385
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-351385: exit status 85 (55.898869ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-351385 | jenkins | v1.33.1 | 14 Aug 24 16:09 UTC |          |
	|         | -p download-only-351385        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 16:09:19
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 16:09:19.640320   20611 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:09:19.640445   20611 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:09:19.640455   20611 out.go:304] Setting ErrFile to fd 2...
	I0814 16:09:19.640461   20611 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:09:19.640647   20611 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13813/.minikube/bin
	W0814 16:09:19.640779   20611 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19446-13813/.minikube/config/config.json: open /home/jenkins/minikube-integration/19446-13813/.minikube/config/config.json: no such file or directory
	I0814 16:09:19.641397   20611 out.go:298] Setting JSON to true
	I0814 16:09:19.642267   20611 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3104,"bootTime":1723648656,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 16:09:19.642326   20611 start.go:139] virtualization: kvm guest
	I0814 16:09:19.644572   20611 out.go:97] [download-only-351385] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0814 16:09:19.644663   20611 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19446-13813/.minikube/cache/preloaded-tarball: no such file or directory
	I0814 16:09:19.644687   20611 notify.go:220] Checking for updates...
	I0814 16:09:19.646235   20611 out.go:169] MINIKUBE_LOCATION=19446
	I0814 16:09:19.647842   20611 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 16:09:19.649243   20611 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19446-13813/kubeconfig
	I0814 16:09:19.650532   20611 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13813/.minikube
	I0814 16:09:19.651779   20611 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0814 16:09:19.654414   20611 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0814 16:09:19.654649   20611 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 16:09:19.675931   20611 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0814 16:09:19.676035   20611 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0814 16:09:20.049256   20611 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-14 16:09:20.040671055 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0814 16:09:20.049360   20611 docker.go:307] overlay module found
	I0814 16:09:20.050886   20611 out.go:97] Using the docker driver based on user configuration
	I0814 16:09:20.050907   20611 start.go:297] selected driver: docker
	I0814 16:09:20.050911   20611 start.go:901] validating driver "docker" against <nil>
	I0814 16:09:20.050983   20611 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0814 16:09:20.095854   20611 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-14 16:09:20.087462033 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0814 16:09:20.095995   20611 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 16:09:20.096500   20611 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0814 16:09:20.096652   20611 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0814 16:09:20.098424   20611 out.go:169] Using Docker driver with root privileges
	I0814 16:09:20.099698   20611 cni.go:84] Creating CNI manager for ""
	I0814 16:09:20.099712   20611 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0814 16:09:20.099723   20611 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0814 16:09:20.099781   20611 start.go:340] cluster config:
	{Name:download-only-351385 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-351385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 16:09:20.101132   20611 out.go:97] Starting "download-only-351385" primary control-plane node in "download-only-351385" cluster
	I0814 16:09:20.101146   20611 cache.go:121] Beginning downloading kic base image for docker with crio
	I0814 16:09:20.102384   20611 out.go:97] Pulling base image v0.0.44-1723567951-19429 ...
	I0814 16:09:20.102412   20611 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 16:09:20.102533   20611 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 in local docker daemon
	I0814 16:09:20.117974   20611 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 to local cache
	I0814 16:09:20.118166   20611 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 in local cache directory
	I0814 16:09:20.118257   20611 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 to local cache
	I0814 16:09:20.207814   20611 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0814 16:09:20.207843   20611 cache.go:56] Caching tarball of preloaded images
	I0814 16:09:20.207987   20611 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 16:09:20.210192   20611 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0814 16:09:20.210216   20611 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0814 16:09:20.323320   20611 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19446-13813/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0814 16:09:34.452950   20611 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 as a tarball
	
	
	* The control-plane node download-only-351385 host does not exist
	  To start a cluster, run: "minikube start -p download-only-351385"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-351385
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.0/json-events (16.44s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-707278 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-707278 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (16.436184131s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (16.44s)

x
+
TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

x
+
TestDownloadOnly/v1.31.0/LogsDuration (0.99s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-707278
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-707278: exit status 85 (994.213802ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-351385 | jenkins | v1.33.1 | 14 Aug 24 16:09 UTC |                     |
	|         | -p download-only-351385        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 14 Aug 24 16:09 UTC | 14 Aug 24 16:09 UTC |
	| delete  | -p download-only-351385        | download-only-351385 | jenkins | v1.33.1 | 14 Aug 24 16:09 UTC | 14 Aug 24 16:09 UTC |
	| start   | -o=json --download-only        | download-only-707278 | jenkins | v1.33.1 | 14 Aug 24 16:09 UTC |                     |
	|         | -p download-only-707278        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 16:09:39
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 16:09:39.283629   21011 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:09:39.283862   21011 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:09:39.283870   21011 out.go:304] Setting ErrFile to fd 2...
	I0814 16:09:39.283875   21011 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:09:39.284036   21011 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13813/.minikube/bin
	I0814 16:09:39.284578   21011 out.go:298] Setting JSON to true
	I0814 16:09:39.285442   21011 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3123,"bootTime":1723648656,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 16:09:39.285503   21011 start.go:139] virtualization: kvm guest
	I0814 16:09:39.287788   21011 out.go:97] [download-only-707278] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 16:09:39.287892   21011 notify.go:220] Checking for updates...
	I0814 16:09:39.289385   21011 out.go:169] MINIKUBE_LOCATION=19446
	I0814 16:09:39.290879   21011 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 16:09:39.292286   21011 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19446-13813/kubeconfig
	I0814 16:09:39.293578   21011 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13813/.minikube
	I0814 16:09:39.294791   21011 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0814 16:09:39.297327   21011 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0814 16:09:39.297529   21011 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 16:09:39.318686   21011 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0814 16:09:39.318762   21011 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0814 16:09:39.366022   21011 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-14 16:09:39.357085409 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0814 16:09:39.366132   21011 docker.go:307] overlay module found
	I0814 16:09:39.367820   21011 out.go:97] Using the docker driver based on user configuration
	I0814 16:09:39.367838   21011 start.go:297] selected driver: docker
	I0814 16:09:39.367844   21011 start.go:901] validating driver "docker" against <nil>
	I0814 16:09:39.367916   21011 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0814 16:09:39.415150   21011 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-14 16:09:39.406448771 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0814 16:09:39.415302   21011 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 16:09:39.415789   21011 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0814 16:09:39.415924   21011 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0814 16:09:39.417821   21011 out.go:169] Using Docker driver with root privileges
	I0814 16:09:39.419008   21011 cni.go:84] Creating CNI manager for ""
	I0814 16:09:39.419030   21011 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0814 16:09:39.419040   21011 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0814 16:09:39.419107   21011 start.go:340] cluster config:
	{Name:download-only-707278 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-707278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 16:09:39.420354   21011 out.go:97] Starting "download-only-707278" primary control-plane node in "download-only-707278" cluster
	I0814 16:09:39.420372   21011 cache.go:121] Beginning downloading kic base image for docker with crio
	I0814 16:09:39.421689   21011 out.go:97] Pulling base image v0.0.44-1723567951-19429 ...
	I0814 16:09:39.421710   21011 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 16:09:39.421757   21011 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 in local docker daemon
	I0814 16:09:39.436608   21011 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 to local cache
	I0814 16:09:39.436746   21011 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 in local cache directory
	I0814 16:09:39.436764   21011 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 in local cache directory, skipping pull
	I0814 16:09:39.436771   21011 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 exists in cache, skipping pull
	I0814 16:09:39.436781   21011 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 as a tarball
	I0814 16:09:39.533924   21011 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0814 16:09:39.533952   21011 cache.go:56] Caching tarball of preloaded images
	I0814 16:09:39.534119   21011 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 16:09:39.536066   21011 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0814 16:09:39.536084   21011 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	I0814 16:09:39.649994   21011 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:4a2ae163f7665ceaa95dee8ffc8efdba -> /home/jenkins/minikube-integration/19446-13813/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0814 16:09:53.790987   21011 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	I0814 16:09:53.791078   21011 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19446-13813/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-707278 host does not exist
	  To start a cluster, run: "minikube start -p download-only-707278"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.99s)
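
The preload fetch logged above is reproducible by hand. A minimal sketch, assuming curl and md5sum are available on the host; the URL and the md5 value are copied verbatim from the download.go:107 line in the log:

	curl -fLo preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 \
	  "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4"
	# verify against the checksum minikube embeds in the download URL
	echo "4a2ae163f7665ceaa95dee8ffc8efdba  preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4" | md5sum -c -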

x
+
TestDownloadOnly/v1.31.0/DeleteAll (0.39s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.39s)

x
+
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-707278
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

x
+
TestDownloadOnlyKic (1.05s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-996390 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-996390" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-996390
--- PASS: TestDownloadOnlyKic (1.05s)

x
+
TestBinaryMirror (0.75s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-011666 --alsologtostderr --binary-mirror http://127.0.0.1:38739 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-011666" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-011666
--- PASS: TestBinaryMirror (0.75s)

x
+
TestOffline (58.43s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-113387 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-113387 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (50.187206448s)
helpers_test.go:175: Cleaning up "offline-crio-113387" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-113387
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-113387: (8.244276938s)
--- PASS: TestOffline (58.43s)

x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-146898
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-146898: exit status 85 (47.160197ms)

-- stdout --
	* Profile "addons-146898" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-146898"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-146898
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-146898: exit status 85 (47.997478ms)

-- stdout --
	* Profile "addons-146898" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-146898"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

x
+
TestAddons/Setup (141.16s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-146898 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-146898 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m21.16324958s)
--- PASS: TestAddons/Setup (141.16s)
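
The run above enables the whole addon set at start time via repeated --addons flags. The same toggles are available after the cluster is up; a minimal sketch using the addons subcommands (ingress is just an example here):

	minikube -p addons-146898 addons enable ingress
	minikube -p addons-146898 addons list    # shows per-addon enabled/disabled state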

x
+
TestAddons/serial/GCPAuth/Namespaces (0.15s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-146898 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-146898 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

x
+
TestAddons/parallel/Registry (14.89s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.911011ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-gwcbq" [6f24e44c-5e4f-4ef3-b21c-9950979c1e64] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005469765s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-dbmdb" [e307ed1d-1881-4d95-8ec9-361298af6c49] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00296221s
addons_test.go:342: (dbg) Run:  kubectl --context addons-146898 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-146898 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-146898 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.008648701s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-146898 ip
2024/08/14 16:12:53 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-146898 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.89s)
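
The registry addon is probed two ways above: in-cluster through the Service DNS name, and from the host through the node IP (192.168.49.2 in this run, port 5000). A sketch of the host-side check; the /v2/_catalog path is the standard registry HTTP API, not something this test itself requests:

	curl -s "http://$(minikube -p addons-146898 ip):5000/v2/_catalog"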

x
+
TestAddons/parallel/InspektorGadget (10.74s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-txh95" [e5f7ddcd-7411-4bf9-b3c9-9d6d73ce228a] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003553987s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-146898
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-146898: (5.733184786s)
--- PASS: TestAddons/parallel/InspektorGadget (10.74s)

x
+
TestAddons/parallel/HelmTiller (11.77s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 64.899935ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-57b8n" [ab2aaa5f-4152-4d49-8a92-7653708c9955] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.003338248s
addons_test.go:475: (dbg) Run:  kubectl --context addons-146898 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-146898 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.210501342s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-146898 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.77s)

x
+
TestAddons/parallel/CSI (47.05s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 5.232894ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-146898 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146898 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146898 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146898 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146898 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146898 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146898 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146898 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146898 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146898 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146898 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146898 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146898 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146898 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146898 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-146898 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a7c2c480-c833-4192-8d39-a34fccee2abf] Pending
helpers_test.go:344: "task-pv-pod" [a7c2c480-c833-4192-8d39-a34fccee2abf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a7c2c480-c833-4192-8d39-a34fccee2abf] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.00252051s
addons_test.go:590: (dbg) Run:  kubectl --context addons-146898 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-146898 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-146898 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-146898 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-146898 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-146898 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146898 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146898 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-146898 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [eee7c20c-7a7b-42c7-b9e8-f5efc93a0305] Pending
helpers_test.go:344: "task-pv-pod-restore" [eee7c20c-7a7b-42c7-b9e8-f5efc93a0305] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [eee7c20c-7a7b-42c7-b9e8-f5efc93a0305] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.002952739s
addons_test.go:632: (dbg) Run:  kubectl --context addons-146898 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-146898 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-146898 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-146898 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-146898 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.510205147s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-146898 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (47.05s)
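
Each helpers_test.go:394 line above is a single poll of the PVC phase. A minimal bash equivalent of that wait loop, with an assumed 2s interval (the helper's real interval is not shown in the log):

	while [ "$(kubectl --context addons-146898 get pvc hpvc -o jsonpath='{.status.phase}' -n default)" != "Bound" ]; do
	  sleep 2    # poll until the claim reports phase Bound
	done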

x
+
TestAddons/parallel/Headlamp (21.56s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-146898 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-kjw5k" [ec0a078c-e909-464f-9759-47754a309b78] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-kjw5k" [ec0a078c-e909-464f-9759-47754a309b78] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.003609265s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-146898 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-146898 addons disable headlamp --alsologtostderr -v=1: (5.599729285s)
--- PASS: TestAddons/parallel/Headlamp (21.56s)

x
+
TestAddons/parallel/CloudSpanner (5.47s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-v6bkp" [649ce93e-2e96-450a-a721-fc40ef295a58] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003799647s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-146898
--- PASS: TestAddons/parallel/CloudSpanner (5.47s)

x
+
TestAddons/parallel/LocalPath (54.95s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-146898 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-146898 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146898 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146898 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146898 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146898 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146898 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146898 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146898 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [fcdadcbc-fbf0-4903-9c47-e86e694f1c77] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [fcdadcbc-fbf0-4903-9c47-e86e694f1c77] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [fcdadcbc-fbf0-4903-9c47-e86e694f1c77] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003534963s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-146898 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-146898 ssh "cat /opt/local-path-provisioner/pvc-b8279e68-d1f4-45e9-8a5a-4efa6552cee5_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-146898 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-146898 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-146898 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-146898 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.98398911s)
--- PASS: TestAddons/parallel/LocalPath (54.95s)

x
+
TestAddons/parallel/NvidiaDevicePlugin (5.46s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-c58zx" [203e32d0-800d-4b0e-acc3-caf43f35078e] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.002828526s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-146898
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.46s)

x
+
TestAddons/parallel/Yakd (11.79s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-762r2" [490c9d90-d0a8-4ffd-a93f-ccbafe35ebc0] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002898181s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-146898 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-146898 addons disable yakd --alsologtostderr -v=1: (5.781700891s)
--- PASS: TestAddons/parallel/Yakd (11.79s)

x
+
TestAddons/StoppedEnableDisable (12.05s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-146898
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-146898: (11.800412264s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-146898
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-146898
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-146898
--- PASS: TestAddons/StoppedEnableDisable (12.05s)

x
+
TestCertOptions (24.68s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-494057 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-494057 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (22.010637925s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-494057 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-494057 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-494057 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-494057" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-494057
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-494057: (2.017818549s)
--- PASS: TestCertOptions (24.68s)
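
The SANs and port requested by the flags above end up in the apiserver certificate, which is what cert_options_test.go:60 dumps. A sketch of checking it by hand; the grep filter is illustrative, not part of the test:

	minikube -p cert-options-494057 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"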

x
+
TestCertExpiration (221.49s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-762945 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-762945 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (22.280905779s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-762945 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-762945 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (16.679193593s)
helpers_test.go:175: Cleaning up "cert-expiration-762945" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-762945
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-762945: (2.531119339s)
--- PASS: TestCertExpiration (221.49s)
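
The effect of --cert-expiration can be confirmed by reading the notAfter date off the apiserver certificate. A sketch, assuming the same certificate path that TestCertOptions inspects:

	minikube -p cert-expiration-762945 ssh \
	  "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"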

x
+
TestForceSystemdFlag (24.03s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-607728 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-607728 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (21.465883765s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-607728 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-607728" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-607728
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-607728: (2.314167448s)
--- PASS: TestForceSystemdFlag (24.03s)
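
docker_test.go:132 above reads the CRI-O drop-in to confirm that --force-systemd took effect. A sketch narrowing the output to the relevant key; grepping for cgroup_manager is an assumption about the file's contents, which the log does not print:

	minikube -p force-systemd-flag-607728 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" \
	  | grep cgroup_manager    # expect the systemd manager when --force-systemd is set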

x
+
TestForceSystemdEnv (36.42s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-125414 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-125414 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (31.687482038s)
helpers_test.go:175: Cleaning up "force-systemd-env-125414" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-125414
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-125414: (4.727641911s)
--- PASS: TestForceSystemdEnv (36.42s)

x
+
TestKVMDriverInstallOrUpdate (4.44s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.44s)

x
+
TestErrorSpam/setup (20.21s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-076345 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-076345 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-076345 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-076345 --driver=docker  --container-runtime=crio: (20.214737489s)
--- PASS: TestErrorSpam/setup (20.21s)

x
+
TestErrorSpam/start (0.55s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-076345 --log_dir /tmp/nospam-076345 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-076345 --log_dir /tmp/nospam-076345 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-076345 --log_dir /tmp/nospam-076345 start --dry-run
--- PASS: TestErrorSpam/start (0.55s)

x
+
TestErrorSpam/status (0.84s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-076345 --log_dir /tmp/nospam-076345 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-076345 --log_dir /tmp/nospam-076345 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-076345 --log_dir /tmp/nospam-076345 status
--- PASS: TestErrorSpam/status (0.84s)

x
+
TestErrorSpam/pause (1.54s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-076345 --log_dir /tmp/nospam-076345 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-076345 --log_dir /tmp/nospam-076345 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-076345 --log_dir /tmp/nospam-076345 pause
--- PASS: TestErrorSpam/pause (1.54s)

x
+
TestErrorSpam/unpause (1.55s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-076345 --log_dir /tmp/nospam-076345 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-076345 --log_dir /tmp/nospam-076345 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-076345 --log_dir /tmp/nospam-076345 unpause
--- PASS: TestErrorSpam/unpause (1.55s)

x
+
TestErrorSpam/stop (1.33s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-076345 --log_dir /tmp/nospam-076345 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-076345 --log_dir /tmp/nospam-076345 stop: (1.160792803s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-076345 --log_dir /tmp/nospam-076345 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-076345 --log_dir /tmp/nospam-076345 stop
--- PASS: TestErrorSpam/stop (1.33s)

x
+
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19446-13813/.minikube/files/etc/test/nested/copy/20599/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

x
+
TestFunctional/serial/StartWithProxy (41.87s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-712264 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-712264 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (41.871116557s)
--- PASS: TestFunctional/serial/StartWithProxy (41.87s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (33.95s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-712264 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-712264 --alsologtostderr -v=8: (33.953336846s)
functional_test.go:663: soft start took 33.954029245s for "functional-712264" cluster.
--- PASS: TestFunctional/serial/SoftStart (33.95s)
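A soft start is simply `minikube start` re-run against a profile that is already up; it should adopt the existing cluster rather than recreate it. A minimal sketch of the same sequence outside the harness, assuming the same profile name and binary path as the log:

    out/minikube-linux-amd64 start -p functional-712264 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=crio
    # second start against the live cluster: the "soft" start being timed above
    out/minikube-linux-amd64 start -p functional-712264 --alsologtostderr -v=8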

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-712264 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.92s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-712264 cache add registry.k8s.io/pause:3.3: (1.099462733s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.92s)

TestFunctional/serial/CacheCmd/cache/add_local (2.13s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-712264 /tmp/TestFunctionalserialCacheCmdcacheadd_local3881854462/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 cache add minikube-local-cache-test:functional-712264
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-712264 cache add minikube-local-cache-test:functional-712264: (1.803546223s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 cache delete minikube-local-cache-test:functional-712264
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-712264
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.13s)
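The add_local variant exercises the cache with an image that exists only in the host's Docker daemon: build, cache add, then clean up both sides. Reproduced by hand (the build-context path here is a placeholder for the test's temp directory):

    docker build -t minikube-local-cache-test:functional-712264 ./some-context   # placeholder context dir
    out/minikube-linux-amd64 -p functional-712264 cache add minikube-local-cache-test:functional-712264
    out/minikube-linux-amd64 -p functional-712264 cache delete minikube-local-cache-test:functional-712264
    docker rmi minikube-local-cache-test:functional-712264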

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-712264 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (264.404233ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)
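The reload check works because `cache reload` pushes every image in minikube's cache back into the node: remove the image with crictl, confirm `inspecti` fails, reload, confirm it succeeds. By hand:

    out/minikube-linux-amd64 -p functional-712264 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-712264 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail
    out/minikube-linux-amd64 -p functional-712264 cache reload
    out/minikube-linux-amd64 -p functional-712264 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # now succeeds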

TestFunctional/serial/CacheCmd/cache/delete (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 kubectl -- --context functional-712264 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-712264 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (38.53s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-712264 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-712264 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.534382185s)
functional_test.go:761: restart took 38.5345161s for "functional-712264" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.53s)
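`--extra-config` forwards per-component flags through to the Kubernetes components; here it enables the NamespaceAutoProvision admission plugin on the apiserver and restarts the cluster. One way to confirm the flag landed, assuming kubeadm's default static-pod manifest path (not something this log itself verifies):

    out/minikube-linux-amd64 -p functional-712264 ssh -- sudo grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml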

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-712264 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
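The health check walks the control-plane pods' JSON, asserting phase Running and a Ready status for each. Roughly the same check as a one-liner with kubectl's jsonpath output:

    kubectl --context functional-712264 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'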

TestFunctional/serial/LogsCmd (1.31s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-712264 logs: (1.305117099s)
--- PASS: TestFunctional/serial/LogsCmd (1.31s)

TestFunctional/serial/LogsFileCmd (1.33s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 logs --file /tmp/TestFunctionalserialLogsFileCmd3375184233/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-712264 logs --file /tmp/TestFunctionalserialLogsFileCmd3375184233/001/logs.txt: (1.330801511s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.33s)
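`logs --file` writes the same aggregated bundle as `logs` to a path instead of stdout, which is the form minikube's issue templates ask for. By hand:

    out/minikube-linux-amd64 -p functional-712264 logs --file /tmp/logs.txt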

TestFunctional/serial/InvalidService (3.96s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-712264 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-712264
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-712264: exit status 115 (314.09641ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30628 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-712264 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.96s)
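The service here has a NodePort allocated (30628 in the table) but no running pod behind it, so `minikube service` exits 115 with SVC_UNREACHABLE instead of returning a usable URL. A quick way to see the root cause, assuming the testdata service name:

    kubectl --context functional-712264 get endpoints invalid-svc   # empty ENDPOINTS column: nothing backs the service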

TestFunctional/parallel/ConfigCmd (0.36s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-712264 config get cpus: exit status 14 (75.180215ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-712264 config get cpus: exit status 14 (53.599192ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)
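`config get` exits 14 whenever the key is absent, so the unset/get pairs above exercise the error path as much as the round trip. The full sequence:

    out/minikube-linux-amd64 -p functional-712264 config set cpus 2
    out/minikube-linux-amd64 -p functional-712264 config get cpus     # prints 2
    out/minikube-linux-amd64 -p functional-712264 config unset cpus
    out/minikube-linux-amd64 -p functional-712264 config get cpus     # exit status 14: key not found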

TestFunctional/parallel/DashboardCmd (10.07s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-712264 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-712264 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 62645: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.07s)

TestFunctional/parallel/DryRun (0.32s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-712264 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-712264 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (136.456122ms)

-- stdout --
	* [functional-712264] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19446
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19446-13813/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13813/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0814 16:21:32.614871   62202 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:21:32.614995   62202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:21:32.615005   62202 out.go:304] Setting ErrFile to fd 2...
	I0814 16:21:32.615011   62202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:21:32.615187   62202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13813/.minikube/bin
	I0814 16:21:32.615715   62202 out.go:298] Setting JSON to false
	I0814 16:21:32.616746   62202 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3837,"bootTime":1723648656,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 16:21:32.616800   62202 start.go:139] virtualization: kvm guest
	I0814 16:21:32.618912   62202 out.go:177] * [functional-712264] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 16:21:32.620531   62202 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 16:21:32.620574   62202 notify.go:220] Checking for updates...
	I0814 16:21:32.623217   62202 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 16:21:32.624457   62202 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19446-13813/kubeconfig
	I0814 16:21:32.625809   62202 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13813/.minikube
	I0814 16:21:32.627100   62202 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 16:21:32.628395   62202 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 16:21:32.630067   62202 config.go:182] Loaded profile config "functional-712264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:21:32.630509   62202 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 16:21:32.653159   62202 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0814 16:21:32.653280   62202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0814 16:21:32.700199   62202 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-14 16:21:32.691689515 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0814 16:21:32.700308   62202 docker.go:307] overlay module found
	I0814 16:21:32.702113   62202 out.go:177] * Using the docker driver based on existing profile
	I0814 16:21:32.703491   62202 start.go:297] selected driver: docker
	I0814 16:21:32.703502   62202 start.go:901] validating driver "docker" against &{Name:functional-712264 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-712264 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 16:21:32.703604   62202 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 16:21:32.705532   62202 out.go:177] 
	W0814 16:21:32.706768   62202 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0814 16:21:32.708096   62202 out.go:177] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-712264 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.32s)
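`--dry-run` runs minikube's full validation pass without touching the cluster, which is why the 250MB request fails immediately with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) while the second dry run, with no memory override, passes. Reproduced:

    out/minikube-linux-amd64 start -p functional-712264 --dry-run --memory 250MB --driver=docker --container-runtime=crio; echo "exit=$?"   # expect exit=23
    out/minikube-linux-amd64 start -p functional-712264 --dry-run --driver=docker --container-runtime=crio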

TestFunctional/parallel/InternationalLanguage (0.14s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-712264 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-712264 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (136.036606ms)

-- stdout --
	* [functional-712264] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19446
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19446-13813/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13813/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0814 16:21:32.935086   62398 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:21:32.935190   62398 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:21:32.935200   62398 out.go:304] Setting ErrFile to fd 2...
	I0814 16:21:32.935204   62398 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:21:32.935483   62398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13813/.minikube/bin
	I0814 16:21:32.935998   62398 out.go:298] Setting JSON to false
	I0814 16:21:32.937059   62398 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3837,"bootTime":1723648656,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 16:21:32.937123   62398 start.go:139] virtualization: kvm guest
	I0814 16:21:32.939299   62398 out.go:177] * [functional-712264] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0814 16:21:32.940789   62398 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 16:21:32.940804   62398 notify.go:220] Checking for updates...
	I0814 16:21:32.943405   62398 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 16:21:32.944833   62398 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19446-13813/kubeconfig
	I0814 16:21:32.946372   62398 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13813/.minikube
	I0814 16:21:32.947586   62398 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 16:21:32.948862   62398 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 16:21:32.950641   62398 config.go:182] Loaded profile config "functional-712264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:21:32.951078   62398 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 16:21:32.972353   62398 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0814 16:21:32.972465   62398 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0814 16:21:33.018097   62398 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-14 16:21:33.00914287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0814 16:21:33.018230   62398 docker.go:307] overlay module found
	I0814 16:21:33.020163   62398 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0814 16:21:33.021459   62398 start.go:297] selected driver: docker
	I0814 16:21:33.021478   62398 start.go:901] validating driver "docker" against &{Name:functional-712264 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-712264 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 16:21:33.021579   62398 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 16:21:33.024288   62398 out.go:177] 
	W0814 16:21:33.025561   62398 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0814 16:21:33.026981   62398 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)
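The French output comes from minikube's translation catalog; the test repeats the undersized-memory dry run under a non-English locale and expects the same RSRC_INSUFFICIENT_REQ_MEMORY failure, localized. Assuming the locale is selected through the standard LC_ALL environment variable (an assumption about how the suite drives it, not something visible in this log), the equivalent is:

    LC_ALL=fr out/minikube-linux-amd64 start -p functional-712264 --dry-run --memory 250MB --driver=docker --container-runtime=crio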

TestFunctional/parallel/StatusCmd (0.86s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.86s)
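`status -f` takes a Go template over the status struct, so the labels are free text while the fields are .Host, .Kubelet, .APIServer and .Kubeconfig (the `kublet:` label in the command above is a verbatim misspelling in the test's format string, not a field name). For example:

    out/minikube-linux-amd64 -p functional-712264 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
    out/minikube-linux-amd64 -p functional-712264 status -o json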

TestFunctional/parallel/ServiceCmdConnect (9.82s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-712264 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-712264 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-jgrs7" [c6836908-7681-41cc-9686-d954ff42465a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-jgrs7" [c6836908-7681-41cc-9686-d954ff42465a] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.027712952s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31550
functional_test.go:1675: http://192.168.49.2:31550: success! body:

Hostname: hello-node-connect-67bdd5bbb4-jgrs7

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31550
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.82s)
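The connectivity check is the standard deploy/expose/probe loop: create a deployment, expose it as a NodePort, ask minikube for the URL, then fetch it. By hand (curl stands in for the test's Go HTTP client):

    kubectl --context functional-712264 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-712264 expose deployment hello-node-connect --type=NodePort --port=8080
    curl "$(out/minikube-linux-amd64 -p functional-712264 service hello-node-connect --url)"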

TestFunctional/parallel/AddonsCmd (0.11s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (43.82s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [40d72149-f766-46e9-bae3-382b4f3f9ce8] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003333388s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-712264 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-712264 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-712264 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-712264 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-712264 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a589ad12-4b7f-481c-bb1c-57af0a4a2c71] Pending
helpers_test.go:344: "sp-pod" [a589ad12-4b7f-481c-bb1c-57af0a4a2c71] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a589ad12-4b7f-481c-bb1c-57af0a4a2c71] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.003216507s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-712264 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-712264 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-712264 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a47d0cec-e8e9-4ddf-b9e3-f03a776b96b9] Pending
helpers_test.go:344: "sp-pod" [a47d0cec-e8e9-4ddf-b9e3-f03a776b96b9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a47d0cec-e8e9-4ddf-b9e3-f03a776b96b9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.00414602s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-712264 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (43.82s)
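The key assertion in the PVC test is durability: /tmp/mount/foo is written in the first sp-pod, the pod is deleted, and a fresh pod bound to the same claim must still see the file. Condensed, with the same testdata manifests:

    kubectl --context functional-712264 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-712264 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-712264 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-712264 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-712264 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-712264 exec sp-pod -- ls /tmp/mount   # foo must survive the pod swap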

TestFunctional/parallel/SSHCmd (0.6s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.60s)

TestFunctional/parallel/CpCmd (2.05s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 ssh -n functional-712264 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 cp functional-712264:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd172836613/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 ssh -n functional-712264 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 ssh -n functional-712264 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.05s)
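`minikube cp` is exercised in three directions here: host file into the node, node file back to the host, and a copy into a node directory that does not exist yet. Minimal form:

    out/minikube-linux-amd64 -p functional-712264 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-712264 cp functional-712264:/home/docker/cp-test.txt /tmp/cp-test.txt
    out/minikube-linux-amd64 -p functional-712264 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt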

TestFunctional/parallel/MySQL (23.49s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-712264 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-d8c7c" [878bb4d4-e885-4a5c-93f6-7a45f33d2435] Pending
helpers_test.go:344: "mysql-6cdb49bbb-d8c7c" [878bb4d4-e885-4a5c-93f6-7a45f33d2435] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-d8c7c" [878bb4d4-e885-4a5c-93f6-7a45f33d2435] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.004490566s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-712264 exec mysql-6cdb49bbb-d8c7c -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-712264 exec mysql-6cdb49bbb-d8c7c -- mysql -ppassword -e "show databases;": exit status 1 (107.00329ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-712264 exec mysql-6cdb49bbb-d8c7c -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-712264 exec mysql-6cdb49bbb-d8c7c -- mysql -ppassword -e "show databases;": exit status 1 (98.35062ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-712264 exec mysql-6cdb49bbb-d8c7c -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.49s)
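The two ERROR 2002 failures are expected noise: the pod reports Running before mysqld has finished creating its socket, so the test simply retries the query until it succeeds. A shell-side equivalent of that retry loop, using the pod name from this run:

    until kubectl --context functional-712264 exec mysql-6cdb49bbb-d8c7c -- \
        mysql -ppassword -e "show databases;"; do sleep 2; done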

TestFunctional/parallel/FileSync (0.28s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/20599/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 ssh "sudo cat /etc/test/nested/copy/20599/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)
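File sync mirrors everything under $MINIKUBE_HOME/files into the node at the same relative path, which is how the local .minikube/files/etc/test/nested/copy/20599/hosts from the CopySyncFile step ends up at /etc/test/nested/copy/20599/hosts in the VM. To inspect it:

    out/minikube-linux-amd64 -p functional-712264 ssh "sudo cat /etc/test/nested/copy/20599/hosts"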

TestFunctional/parallel/CertSync (2.03s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/20599.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 ssh "sudo cat /etc/ssl/certs/20599.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/20599.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 ssh "sudo cat /usr/share/ca-certificates/20599.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/205992.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 ssh "sudo cat /etc/ssl/certs/205992.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/205992.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 ssh "sudo cat /usr/share/ca-certificates/205992.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.03s)
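Cert sync installs each extra certificate in three places: /etc/ssl/certs/<name>.pem, /usr/share/ca-certificates/<name>.pem, and an OpenSSL subject-hash name such as 51391683.0. The hash name can be derived on the host (a sketch; assumes openssl is installed, and that 51391683.0 corresponds to 20599.pem, as the pairing of checks above suggests):

    openssl x509 -hash -noout -in 20599.pem   # prints the subject hash, e.g. 51391683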

TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-712264 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-712264 ssh "sudo systemctl is-active docker": exit status 1 (239.908431ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-712264 ssh "sudo systemctl is-active containerd": exit status 1 (235.686604ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)
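With crio as the active runtime, docker and containerd must both be inactive; `systemctl is-active` exits 3 for an inactive unit, which surfaces here as `ssh: Process exited with status 3`. Spot check (the crio unit name is an assumption about the node image):

    out/minikube-linux-amd64 -p functional-712264 ssh "sudo systemctl is-active crio"     # expect: active
    out/minikube-linux-amd64 -p functional-712264 ssh "sudo systemctl is-active docker"   # expect: inactive, non-zero exit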

TestFunctional/parallel/License (0.62s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.62s)

                                                
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-712264 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-712264
localhost/kicbase/echo-server:functional-712264
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kindest/kindnetd:v20240730-75a5af0c
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-712264 image ls --format short --alsologtostderr:
I0814 16:21:39.010478   63389 out.go:291] Setting OutFile to fd 1 ...
I0814 16:21:39.010593   63389 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 16:21:39.010602   63389 out.go:304] Setting ErrFile to fd 2...
I0814 16:21:39.010607   63389 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 16:21:39.010822   63389 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13813/.minikube/bin
I0814 16:21:39.011543   63389 config.go:182] Loaded profile config "functional-712264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0814 16:21:39.011654   63389 config.go:182] Loaded profile config "functional-712264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0814 16:21:39.012158   63389 cli_runner.go:164] Run: docker container inspect functional-712264 --format={{.State.Status}}
I0814 16:21:39.037524   63389 ssh_runner.go:195] Run: systemctl --version
I0814 16:21:39.037580   63389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-712264
I0814 16:21:39.057522   63389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/functional-712264/id_rsa Username:docker}
I0814 16:21:39.153187   63389 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
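
As the stderr trace above shows, "image ls" is implemented by opening an ssh session to the node and querying the runtime with "sudo crictl images --output json". A roughly equivalent manual query, reusing the same command the test driver runs:
    $ out/minikube-linux-amd64 -p functional-712264 ssh "sudo crictl images --output json"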

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-712264 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-proxy              | v1.31.0            | ad83b2ca7b09e | 92.7MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | 917d7814b9b5b | 87.2MB |
| docker.io/library/nginx                 | alpine             | 1ae23480369fa | 45.1MB |
| localhost/minikube-local-cache-test     | functional-712264  | facd9b0a374ed | 3.33kB |
| registry.k8s.io/kube-scheduler          | v1.31.0            | 1766f54c897f0 | 68.4MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-apiserver          | v1.31.0            | 604f5db92eaa8 | 95.2MB |
| registry.k8s.io/kube-controller-manager | v1.31.0            | 045733566833c | 89.4MB |
| docker.io/library/nginx                 | latest             | 900dca2a61f57 | 192MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/kicbase/echo-server           | functional-712264  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-712264 image ls --format table --alsologtostderr:
I0814 16:21:43.150377   65001 out.go:291] Setting OutFile to fd 1 ...
I0814 16:21:43.150521   65001 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 16:21:43.150531   65001 out.go:304] Setting ErrFile to fd 2...
I0814 16:21:43.150535   65001 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 16:21:43.150716   65001 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13813/.minikube/bin
I0814 16:21:43.151406   65001 config.go:182] Loaded profile config "functional-712264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0814 16:21:43.151505   65001 config.go:182] Loaded profile config "functional-712264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0814 16:21:43.152008   65001 cli_runner.go:164] Run: docker container inspect functional-712264 --format={{.State.Status}}
I0814 16:21:43.172147   65001 ssh_runner.go:195] Run: systemctl --version
I0814 16:21:43.172205   65001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-712264
I0814 16:21:43.194937   65001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/functional-712264/id_rsa Username:docker}
I0814 16:21:43.289243   65001 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-712264 image ls --format json --alsologtostderr:
[{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"87165492"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce5
60a86b6494","repoDigests":["registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf","registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"92728217"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95","repoDigests":["docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9","docker.io/library/nginx@sha256:a377278b7dde3a8012b25d141d025a88dbf9f5ed13c5cdf21ee241e7ec07ab57"],"repoTags":["docker.io/library/nginx:alpine"],"size":"45068794"},{"id":"56cc512116c8f894f11ce1995460
aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31
.0"],"size":"95233506"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d","registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"89437512"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":["registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a","registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"68420936"},{"id":"facd9b0a374ed7ca3dcd781e3d8719894108bc65fcd23e6d98cb83eb3a909fbc","repoDigests":["localhost/minikube-local-cache-test@sha256:2d133c76b198ae0bbaf0a446da93d9ddd2967701292a52bb5001af37cb0b1fc9"],"repoTags":["localhost/minikub
e-local-cache-test:functional-712264"],"size":"3330"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"900dca2a61f5799aabe662339a940cf444dfd39777648ca6a953f82b685997ed","repoDigests":["docker.io/library/nginx@sha256:98f8ec75657d21b924fe4f69b6b9bff2f6550ea48838af479d8894a852000e40","docker.io/library/nginx@sha256:a3ab061d6909191271bcf24b9ab6eee9e8fc5f2fbf1525c5bd84d21f27a9d708"],"repoTags":["docker.io/library/nginx:latest"],"size":"191750286"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da
31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-712264"],"size":"4943877"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"si
ze":"97846543"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"0184c1613d92931126feb4c548e5da11015513b
9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-712264 image ls --format json --alsologtostderr:
I0814 16:21:42.971188   64907 out.go:291] Setting OutFile to fd 1 ...
I0814 16:21:42.971318   64907 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 16:21:42.971341   64907 out.go:304] Setting ErrFile to fd 2...
I0814 16:21:42.971346   64907 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 16:21:42.971546   64907 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13813/.minikube/bin
I0814 16:21:42.972111   64907 config.go:182] Loaded profile config "functional-712264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0814 16:21:42.972219   64907 config.go:182] Loaded profile config "functional-712264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0814 16:21:42.972632   64907 cli_runner.go:164] Run: docker container inspect functional-712264 --format={{.State.Status}}
I0814 16:21:42.994867   64907 ssh_runner.go:195] Run: systemctl --version
I0814 16:21:42.994915   64907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-712264
I0814 16:21:43.013233   64907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/functional-712264/id_rsa Username:docker}
I0814 16:21:43.101743   64907 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
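
The JSON form is an array of objects with id, repoDigests, repoTags, and size fields. A small post-processing sketch, assuming jq is available on the host (jq is not part of this test):
    $ out/minikube-linux-amd64 -p functional-712264 image ls --format json | jq -r '.[].repoTags[]'
Entries with an empty repoTags array, such as the dashboard image above, simply produce no output.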

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-712264 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95
repoDigests:
- docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9
- docker.io/library/nginx@sha256:a377278b7dde3a8012b25d141d025a88dbf9f5ed13c5cdf21ee241e7ec07ab57
repoTags:
- docker.io/library/nginx:alpine
size: "45068794"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 900dca2a61f5799aabe662339a940cf444dfd39777648ca6a953f82b685997ed
repoDigests:
- docker.io/library/nginx@sha256:98f8ec75657d21b924fe4f69b6b9bff2f6550ea48838af479d8894a852000e40
- docker.io/library/nginx@sha256:a3ab061d6909191271bcf24b9ab6eee9e8fc5f2fbf1525c5bd84d21f27a9d708
repoTags:
- docker.io/library/nginx:latest
size: "191750286"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-712264
size: "4943877"
- id: facd9b0a374ed7ca3dcd781e3d8719894108bc65fcd23e6d98cb83eb3a909fbc
repoDigests:
- localhost/minikube-local-cache-test@sha256:2d133c76b198ae0bbaf0a446da93d9ddd2967701292a52bb5001af37cb0b1fc9
repoTags:
- localhost/minikube-local-cache-test:functional-712264
size: "3330"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "89437512"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "68420936"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "95233506"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "92728217"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "87165492"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-712264 image ls --format yaml --alsologtostderr:
I0814 16:21:39.239228   63474 out.go:291] Setting OutFile to fd 1 ...
I0814 16:21:39.239845   63474 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 16:21:39.239894   63474 out.go:304] Setting ErrFile to fd 2...
I0814 16:21:39.239914   63474 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 16:21:39.243092   63474 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13813/.minikube/bin
I0814 16:21:39.243943   63474 config.go:182] Loaded profile config "functional-712264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0814 16:21:39.244138   63474 config.go:182] Loaded profile config "functional-712264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0814 16:21:39.244756   63474 cli_runner.go:164] Run: docker container inspect functional-712264 --format={{.State.Status}}
I0814 16:21:39.264908   63474 ssh_runner.go:195] Run: systemctl --version
I0814 16:21:39.264952   63474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-712264
I0814 16:21:39.283960   63474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/functional-712264/id_rsa Username:docker}
I0814 16:21:39.378708   63474 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-712264 ssh pgrep buildkitd: exit status 1 (268.304017ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 image build -t localhost/my-image:functional-712264 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-712264 image build -t localhost/my-image:functional-712264 testdata/build --alsologtostderr: (3.857977021s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-712264 image build -t localhost/my-image:functional-712264 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 702d11fef35
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-712264
--> 4a5b92f088c
Successfully tagged localhost/my-image:functional-712264
4a5b92f088c751ba8d78b9b4febe4c1b31880b95a149a28e53fac2765d43d495
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-712264 image build -t localhost/my-image:functional-712264 testdata/build --alsologtostderr:
I0814 16:21:39.762053   63780 out.go:291] Setting OutFile to fd 1 ...
I0814 16:21:39.763503   63780 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 16:21:39.763516   63780 out.go:304] Setting ErrFile to fd 2...
I0814 16:21:39.763522   63780 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 16:21:39.763885   63780 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13813/.minikube/bin
I0814 16:21:39.764801   63780 config.go:182] Loaded profile config "functional-712264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0814 16:21:39.765715   63780 config.go:182] Loaded profile config "functional-712264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0814 16:21:39.766320   63780 cli_runner.go:164] Run: docker container inspect functional-712264 --format={{.State.Status}}
I0814 16:21:39.784651   63780 ssh_runner.go:195] Run: systemctl --version
I0814 16:21:39.784708   63780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-712264
I0814 16:21:39.802425   63780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/functional-712264/id_rsa Username:docker}
I0814 16:21:39.889192   63780 build_images.go:161] Building image from path: /tmp/build.4266698665.tar
I0814 16:21:39.889262   63780 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0814 16:21:39.897610   63780 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4266698665.tar
I0814 16:21:39.901003   63780 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4266698665.tar: stat -c "%s %y" /var/lib/minikube/build/build.4266698665.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4266698665.tar': No such file or directory
I0814 16:21:39.901045   63780 ssh_runner.go:362] scp /tmp/build.4266698665.tar --> /var/lib/minikube/build/build.4266698665.tar (3072 bytes)
I0814 16:21:39.939765   63780 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4266698665
I0814 16:21:39.948357   63780 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4266698665 -xf /var/lib/minikube/build/build.4266698665.tar
I0814 16:21:39.956979   63780 crio.go:315] Building image: /var/lib/minikube/build/build.4266698665
I0814 16:21:39.957073   63780 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-712264 /var/lib/minikube/build/build.4266698665 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0814 16:21:43.553359   63780 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-712264 /var/lib/minikube/build/build.4266698665 --cgroup-manager=cgroupfs: (3.596255781s)
I0814 16:21:43.553449   63780 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4266698665
I0814 16:21:43.561904   63780 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4266698665.tar
I0814 16:21:43.569751   63780 build_images.go:217] Built localhost/my-image:functional-712264 from /tmp/build.4266698665.tar
I0814 16:21:43.569783   63780 build_images.go:133] succeeded building to: functional-712264
I0814 16:21:43.569789   63780 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.33s)
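
The three STEP lines imply a build context roughly like the following. This is a sketch reconstructed from the output above, not the literal contents of testdata/build, and the content.txt payload is a placeholder:
    $ mkdir build && cd build
    $ echo placeholder > content.txt            # stand-in for the real test fixture
    $ cat > Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /
    EOF
    $ out/minikube-linux-amd64 -p functional-712264 image build -t localhost/my-image:functional-712264 .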

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.96s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.936340012s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-712264
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.96s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.61s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.61s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 image load --daemon kicbase/echo-server:functional-712264 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 image load --daemon kicbase/echo-server:functional-712264 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-712264
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 image load --daemon kicbase/echo-server:functional-712264 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-712264 image load --daemon kicbase/echo-server:functional-712264 --alsologtostderr: (1.038868749s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 image save kicbase/echo-server:functional-712264 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-712264 image save kicbase/echo-server:functional-712264 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (3.072011812s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.07s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-712264 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-712264 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-712264 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-712264 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 58767: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-712264 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (22.27s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-712264 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [4529eac5-85b1-4b37-ac74-3881de9c2318] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [4529eac5-85b1-4b37-ac74-3881de9c2318] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 22.004016123s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (22.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 image rm kicbase/echo-server:functional-712264 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.63s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-712264 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.767120003s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.98s)
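
Together with ImageCommands/ImageSaveToFile above, this covers the save/load round trip through a tarball. Condensed, the same workflow looks like the following (paths shortened; the --alsologtostderr flag used by the test is optional):
    $ out/minikube-linux-amd64 -p functional-712264 image save kicbase/echo-server:functional-712264 ./echo-server-save.tar
    $ out/minikube-linux-amd64 -p functional-712264 image load ./echo-server-save.tar
    $ out/minikube-linux-amd64 -p functional-712264 image ls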

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-712264
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 image save --daemon kicbase/echo-server:functional-712264 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-712264
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.54s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-712264 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-712264 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-ntbbb" [5bad66ab-5125-4e3d-abcf-9d0d74aeb842] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-ntbbb" [5bad66ab-5125-4e3d-abcf-9d0d74aeb842] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003339341s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.14s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.87s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.87s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "298.664425ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "52.866443ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.92s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 service list -o json
functional_test.go:1494: Took "917.406797ms" to run "out/minikube-linux-amd64 -p functional-712264 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.92s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "353.552981ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "52.729152ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.52s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-712264 /tmp/TestFunctionalparallelMountCmdany-port2646466746/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1723652490698306272" to /tmp/TestFunctionalparallelMountCmdany-port2646466746/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1723652490698306272" to /tmp/TestFunctionalparallelMountCmdany-port2646466746/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1723652490698306272" to /tmp/TestFunctionalparallelMountCmdany-port2646466746/001/test-1723652490698306272
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-712264 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (339.402854ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 14 16:21 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 14 16:21 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 14 16:21 test-1723652490698306272
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 ssh cat /mount-9p/test-1723652490698306272
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-712264 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [97973267-71ea-42b5-8129-c5493869787c] Pending
helpers_test.go:344: "busybox-mount" [97973267-71ea-42b5-8129-c5493869787c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [97973267-71ea-42b5-8129-c5493869787c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [97973267-71ea-42b5-8129-c5493869787c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.002955769s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-712264 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-712264 /tmp/TestFunctionalparallelMountCmdany-port2646466746/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.52s)
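
The mount tests all follow the same pattern: start "minikube mount" as a background daemon, poll findmnt inside the guest until the 9p mount shows up (the first findmnt probe above exiting 1 is that polling, not a failure), then exercise the mounted files. A minimal by-hand sketch, with /tmp/hostdir as a hypothetical host directory:
    $ out/minikube-linux-amd64 mount -p functional-712264 /tmp/hostdir:/mount-9p &
    $ out/minikube-linux-amd64 -p functional-712264 ssh "findmnt -T /mount-9p | grep 9p"
    $ out/minikube-linux-amd64 -p functional-712264 ssh -- ls -la /mount-9p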

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30837
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30837
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.52s)
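
ServiceCmd/DeployApp, HTTPS, Format, and URL together walk the usual service flow: create a deployment, expose it as a NodePort, then ask minikube for a reachable URL. Condensed from the commands in this run (the final line is the endpoint this run reported):
    $ kubectl --context functional-712264 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    $ kubectl --context functional-712264 expose deployment hello-node --type=NodePort --port=8080
    $ out/minikube-linux-amd64 -p functional-712264 service hello-node --url
    http://192.168.49.2:30837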

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-712264 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.180.37 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
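
The tunnel subtests rely on "minikube tunnel" running as a daemon so that LoadBalancer services receive an ingress IP. A sketch of the same flow by hand, using the values from this run (the final curl is an assumption about how you would consume the tunnel, not part of the test):
    $ out/minikube-linux-amd64 -p functional-712264 tunnel --alsologtostderr &
    $ kubectl --context functional-712264 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
    10.100.180.37
    $ curl http://10.100.180.37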

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-712264 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.84s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-712264 /tmp/TestFunctionalparallelMountCmdspecific-port1647592040/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-712264 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (320.086073ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-712264 /tmp/TestFunctionalparallelMountCmdspecific-port1647592040/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-712264 ssh "sudo umount -f /mount-9p": exit status 1 (263.667337ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-712264 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-712264 /tmp/TestFunctionalparallelMountCmdspecific-port1647592040/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.84s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.87s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-712264 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3687098670/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-712264 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3687098670/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-712264 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3687098670/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-712264 ssh "findmnt -T" /mount1: exit status 1 (378.718296ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-712264 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-712264 --kill=true
2024/08/14 16:21:42 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-712264 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3687098670/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-712264 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3687098670/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-712264 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3687098670/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.87s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-712264
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-712264
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-712264
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (101.02s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-912396 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0814 16:22:20.770943   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:22:20.777736   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:22:20.789119   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:22:20.810573   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:22:20.851950   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:22:20.933371   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:22:21.094936   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:22:21.416629   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:22:22.058934   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:22:23.340292   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:22:25.902547   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:22:31.024276   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:22:41.266271   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:23:01.747628   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-912396 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m40.353488737s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (101.02s)

TestMultiControlPlane/serial/DeployApp (5.28s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912396 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912396 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-912396 -- rollout status deployment/busybox: (3.488287981s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912396 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912396 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912396 -- exec busybox-7dff88458-4cmml -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912396 -- exec busybox-7dff88458-7jh8t -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912396 -- exec busybox-7dff88458-ffqct -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912396 -- exec busybox-7dff88458-4cmml -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912396 -- exec busybox-7dff88458-7jh8t -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912396 -- exec busybox-7dff88458-ffqct -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912396 -- exec busybox-7dff88458-4cmml -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912396 -- exec busybox-7dff88458-7jh8t -- nslookup kubernetes.default.svc.cluster.local
E0814 16:23:42.709352   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912396 -- exec busybox-7dff88458-ffqct -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.28s)

TestMultiControlPlane/serial/PingHostFromPods (0.99s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912396 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912396 -- exec busybox-7dff88458-4cmml -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912396 -- exec busybox-7dff88458-4cmml -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912396 -- exec busybox-7dff88458-7jh8t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912396 -- exec busybox-7dff88458-7jh8t -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912396 -- exec busybox-7dff88458-ffqct -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912396 -- exec busybox-7dff88458-ffqct -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.99s)

TestMultiControlPlane/serial/AddWorkerNode (36.47s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-912396 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-912396 -v=7 --alsologtostderr: (35.647132817s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (36.47s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-912396 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.63s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.63s)

TestMultiControlPlane/serial/CopyFile (15.48s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 cp testdata/cp-test.txt ha-912396:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 ssh -n ha-912396 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 cp ha-912396:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2579357112/001/cp-test_ha-912396.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 ssh -n ha-912396 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 cp ha-912396:/home/docker/cp-test.txt ha-912396-m02:/home/docker/cp-test_ha-912396_ha-912396-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 ssh -n ha-912396 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 ssh -n ha-912396-m02 "sudo cat /home/docker/cp-test_ha-912396_ha-912396-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 cp ha-912396:/home/docker/cp-test.txt ha-912396-m03:/home/docker/cp-test_ha-912396_ha-912396-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 ssh -n ha-912396 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 ssh -n ha-912396-m03 "sudo cat /home/docker/cp-test_ha-912396_ha-912396-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 cp ha-912396:/home/docker/cp-test.txt ha-912396-m04:/home/docker/cp-test_ha-912396_ha-912396-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 ssh -n ha-912396 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 ssh -n ha-912396-m04 "sudo cat /home/docker/cp-test_ha-912396_ha-912396-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 cp testdata/cp-test.txt ha-912396-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 ssh -n ha-912396-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 cp ha-912396-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2579357112/001/cp-test_ha-912396-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 ssh -n ha-912396-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 cp ha-912396-m02:/home/docker/cp-test.txt ha-912396:/home/docker/cp-test_ha-912396-m02_ha-912396.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 ssh -n ha-912396-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 ssh -n ha-912396 "sudo cat /home/docker/cp-test_ha-912396-m02_ha-912396.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 cp ha-912396-m02:/home/docker/cp-test.txt ha-912396-m03:/home/docker/cp-test_ha-912396-m02_ha-912396-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 ssh -n ha-912396-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 ssh -n ha-912396-m03 "sudo cat /home/docker/cp-test_ha-912396-m02_ha-912396-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 cp ha-912396-m02:/home/docker/cp-test.txt ha-912396-m04:/home/docker/cp-test_ha-912396-m02_ha-912396-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 ssh -n ha-912396-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 ssh -n ha-912396-m04 "sudo cat /home/docker/cp-test_ha-912396-m02_ha-912396-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 cp testdata/cp-test.txt ha-912396-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 ssh -n ha-912396-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 cp ha-912396-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2579357112/001/cp-test_ha-912396-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 ssh -n ha-912396-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 cp ha-912396-m03:/home/docker/cp-test.txt ha-912396:/home/docker/cp-test_ha-912396-m03_ha-912396.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 ssh -n ha-912396-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 ssh -n ha-912396 "sudo cat /home/docker/cp-test_ha-912396-m03_ha-912396.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 cp ha-912396-m03:/home/docker/cp-test.txt ha-912396-m02:/home/docker/cp-test_ha-912396-m03_ha-912396-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 ssh -n ha-912396-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 ssh -n ha-912396-m02 "sudo cat /home/docker/cp-test_ha-912396-m03_ha-912396-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 cp ha-912396-m03:/home/docker/cp-test.txt ha-912396-m04:/home/docker/cp-test_ha-912396-m03_ha-912396-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 ssh -n ha-912396-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 ssh -n ha-912396-m04 "sudo cat /home/docker/cp-test_ha-912396-m03_ha-912396-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 cp testdata/cp-test.txt ha-912396-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 ssh -n ha-912396-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 cp ha-912396-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2579357112/001/cp-test_ha-912396-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 ssh -n ha-912396-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 cp ha-912396-m04:/home/docker/cp-test.txt ha-912396:/home/docker/cp-test_ha-912396-m04_ha-912396.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 ssh -n ha-912396-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 ssh -n ha-912396 "sudo cat /home/docker/cp-test_ha-912396-m04_ha-912396.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 cp ha-912396-m04:/home/docker/cp-test.txt ha-912396-m02:/home/docker/cp-test_ha-912396-m04_ha-912396-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 ssh -n ha-912396-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 ssh -n ha-912396-m02 "sudo cat /home/docker/cp-test_ha-912396-m04_ha-912396-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 cp ha-912396-m04:/home/docker/cp-test.txt ha-912396-m03:/home/docker/cp-test_ha-912396-m04_ha-912396-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 ssh -n ha-912396-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 ssh -n ha-912396-m03 "sudo cat /home/docker/cp-test_ha-912396-m04_ha-912396-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.48s)

TestMultiControlPlane/serial/StopSecondaryNode (12.49s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-912396 node stop m02 -v=7 --alsologtostderr: (11.834907627s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-912396 status -v=7 --alsologtostderr: exit status 7 (650.525068ms)

-- stdout --
	ha-912396
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-912396-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-912396-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-912396-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0814 16:24:48.369516   86105 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:24:48.369745   86105 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:24:48.369759   86105 out.go:304] Setting ErrFile to fd 2...
	I0814 16:24:48.369766   86105 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:24:48.369951   86105 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13813/.minikube/bin
	I0814 16:24:48.370164   86105 out.go:298] Setting JSON to false
	I0814 16:24:48.370204   86105 mustload.go:65] Loading cluster: ha-912396
	I0814 16:24:48.370248   86105 notify.go:220] Checking for updates...
	I0814 16:24:48.370648   86105 config.go:182] Loaded profile config "ha-912396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:24:48.370668   86105 status.go:255] checking status of ha-912396 ...
	I0814 16:24:48.371081   86105 cli_runner.go:164] Run: docker container inspect ha-912396 --format={{.State.Status}}
	I0814 16:24:48.389415   86105 status.go:330] ha-912396 host status = "Running" (err=<nil>)
	I0814 16:24:48.389454   86105 host.go:66] Checking if "ha-912396" exists ...
	I0814 16:24:48.389751   86105 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-912396
	I0814 16:24:48.406610   86105 host.go:66] Checking if "ha-912396" exists ...
	I0814 16:24:48.406832   86105 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:24:48.406880   86105 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-912396
	I0814 16:24:48.424908   86105 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/ha-912396/id_rsa Username:docker}
	I0814 16:24:48.518191   86105 ssh_runner.go:195] Run: systemctl --version
	I0814 16:24:48.522179   86105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:24:48.532985   86105 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0814 16:24:48.587282   86105 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:72 SystemTime:2024-08-14 16:24:48.576781908 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0814 16:24:48.587814   86105 kubeconfig.go:125] found "ha-912396" server: "https://192.168.49.254:8443"
	I0814 16:24:48.587844   86105 api_server.go:166] Checking apiserver status ...
	I0814 16:24:48.587875   86105 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 16:24:48.598293   86105 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1525/cgroup
	I0814 16:24:48.606878   86105 api_server.go:182] apiserver freezer: "9:freezer:/docker/f9c3aea973904021b39f45be7df67e687b240fa2ec819129f163b4e32bcb73f5/crio/crio-249cf118d88b27a79df0a425384d645115fec3294bb0daa6c59da0b9608a0c74"
	I0814 16:24:48.606951   86105 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f9c3aea973904021b39f45be7df67e687b240fa2ec819129f163b4e32bcb73f5/crio/crio-249cf118d88b27a79df0a425384d645115fec3294bb0daa6c59da0b9608a0c74/freezer.state
	I0814 16:24:48.614805   86105 api_server.go:204] freezer state: "THAWED"
	I0814 16:24:48.614836   86105 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0814 16:24:48.620363   86105 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0814 16:24:48.620388   86105 status.go:422] ha-912396 apiserver status = Running (err=<nil>)
	I0814 16:24:48.620400   86105 status.go:257] ha-912396 status: &{Name:ha-912396 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 16:24:48.620418   86105 status.go:255] checking status of ha-912396-m02 ...
	I0814 16:24:48.620675   86105 cli_runner.go:164] Run: docker container inspect ha-912396-m02 --format={{.State.Status}}
	I0814 16:24:48.637950   86105 status.go:330] ha-912396-m02 host status = "Stopped" (err=<nil>)
	I0814 16:24:48.637973   86105 status.go:343] host is not running, skipping remaining checks
	I0814 16:24:48.637982   86105 status.go:257] ha-912396-m02 status: &{Name:ha-912396-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 16:24:48.638010   86105 status.go:255] checking status of ha-912396-m03 ...
	I0814 16:24:48.638412   86105 cli_runner.go:164] Run: docker container inspect ha-912396-m03 --format={{.State.Status}}
	I0814 16:24:48.655312   86105 status.go:330] ha-912396-m03 host status = "Running" (err=<nil>)
	I0814 16:24:48.655337   86105 host.go:66] Checking if "ha-912396-m03" exists ...
	I0814 16:24:48.655568   86105 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-912396-m03
	I0814 16:24:48.672155   86105 host.go:66] Checking if "ha-912396-m03" exists ...
	I0814 16:24:48.672422   86105 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:24:48.672461   86105 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-912396-m03
	I0814 16:24:48.688835   86105 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/ha-912396-m03/id_rsa Username:docker}
	I0814 16:24:48.778087   86105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:24:48.788856   86105 kubeconfig.go:125] found "ha-912396" server: "https://192.168.49.254:8443"
	I0814 16:24:48.788887   86105 api_server.go:166] Checking apiserver status ...
	I0814 16:24:48.788927   86105 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 16:24:48.798832   86105 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1392/cgroup
	I0814 16:24:48.808509   86105 api_server.go:182] apiserver freezer: "9:freezer:/docker/3396efe08f707ba10af794a7da4adc4edc493de0e161e8f080818f047b0b9c26/crio/crio-bd54fe949e53341b77463c3f481e7e1097d38420583624c2fff0716e45ed1220"
	I0814 16:24:48.808575   86105 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3396efe08f707ba10af794a7da4adc4edc493de0e161e8f080818f047b0b9c26/crio/crio-bd54fe949e53341b77463c3f481e7e1097d38420583624c2fff0716e45ed1220/freezer.state
	I0814 16:24:48.817436   86105 api_server.go:204] freezer state: "THAWED"
	I0814 16:24:48.817463   86105 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0814 16:24:48.821847   86105 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0814 16:24:48.821871   86105 status.go:422] ha-912396-m03 apiserver status = Running (err=<nil>)
	I0814 16:24:48.821880   86105 status.go:257] ha-912396-m03 status: &{Name:ha-912396-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 16:24:48.821894   86105 status.go:255] checking status of ha-912396-m04 ...
	I0814 16:24:48.822140   86105 cli_runner.go:164] Run: docker container inspect ha-912396-m04 --format={{.State.Status}}
	I0814 16:24:48.841106   86105 status.go:330] ha-912396-m04 host status = "Running" (err=<nil>)
	I0814 16:24:48.841131   86105 host.go:66] Checking if "ha-912396-m04" exists ...
	I0814 16:24:48.841400   86105 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-912396-m04
	I0814 16:24:48.859427   86105 host.go:66] Checking if "ha-912396-m04" exists ...
	I0814 16:24:48.859696   86105 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:24:48.859733   86105 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-912396-m04
	I0814 16:24:48.877620   86105 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/ha-912396-m04/id_rsa Username:docker}
	I0814 16:24:48.966233   86105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:24:48.977601   86105 status.go:257] ha-912396-m04 status: &{Name:ha-912396-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.49s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.48s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.48s)

TestMultiControlPlane/serial/RestartSecondaryNode (48.52s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 node start m02 -v=7 --alsologtostderr
E0814 16:25:04.631352   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-912396 node start m02 -v=7 --alsologtostderr: (47.644093562s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (48.52s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.64s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.64s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (188.55s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-912396 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-912396 -v=7 --alsologtostderr
E0814 16:26:05.703426   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/functional-712264/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:26:05.709865   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/functional-712264/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:26:05.721200   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/functional-712264/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:26:05.742625   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/functional-712264/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:26:05.784132   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/functional-712264/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:26:05.865697   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/functional-712264/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:26:06.027335   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/functional-712264/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:26:06.349155   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/functional-712264/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:26:06.991287   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/functional-712264/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:26:08.272928   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/functional-712264/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:26:10.834469   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/functional-712264/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-912396 -v=7 --alsologtostderr: (36.640306782s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-912396 --wait=true -v=7 --alsologtostderr
E0814 16:26:15.955830   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/functional-712264/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:26:26.197917   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/functional-712264/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:26:46.680108   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/functional-712264/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:27:20.771514   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:27:27.642406   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/functional-712264/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:27:48.473123   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-912396 --wait=true -v=7 --alsologtostderr: (2m31.822095763s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-912396
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (188.55s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.32s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 node delete m03 -v=7 --alsologtostderr
E0814 16:28:49.563794   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/functional-712264/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-912396 node delete m03 -v=7 --alsologtostderr: (10.56953725s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.32s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.46s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.46s)

TestMultiControlPlane/serial/StopCluster (35.51s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-912396 stop -v=7 --alsologtostderr: (35.408782875s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-912396 status -v=7 --alsologtostderr: exit status 7 (100.320714ms)

-- stdout --
	ha-912396
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-912396-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-912396-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0814 16:29:34.407985  104127 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:29:34.408246  104127 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:29:34.408255  104127 out.go:304] Setting ErrFile to fd 2...
	I0814 16:29:34.408259  104127 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:29:34.408437  104127 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13813/.minikube/bin
	I0814 16:29:34.408595  104127 out.go:298] Setting JSON to false
	I0814 16:29:34.408621  104127 mustload.go:65] Loading cluster: ha-912396
	I0814 16:29:34.408718  104127 notify.go:220] Checking for updates...
	I0814 16:29:34.408997  104127 config.go:182] Loaded profile config "ha-912396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:29:34.409014  104127 status.go:255] checking status of ha-912396 ...
	I0814 16:29:34.409503  104127 cli_runner.go:164] Run: docker container inspect ha-912396 --format={{.State.Status}}
	I0814 16:29:34.428352  104127 status.go:330] ha-912396 host status = "Stopped" (err=<nil>)
	I0814 16:29:34.428380  104127 status.go:343] host is not running, skipping remaining checks
	I0814 16:29:34.428391  104127 status.go:257] ha-912396 status: &{Name:ha-912396 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 16:29:34.428423  104127 status.go:255] checking status of ha-912396-m02 ...
	I0814 16:29:34.428690  104127 cli_runner.go:164] Run: docker container inspect ha-912396-m02 --format={{.State.Status}}
	I0814 16:29:34.445923  104127 status.go:330] ha-912396-m02 host status = "Stopped" (err=<nil>)
	I0814 16:29:34.445973  104127 status.go:343] host is not running, skipping remaining checks
	I0814 16:29:34.445986  104127 status.go:257] ha-912396-m02 status: &{Name:ha-912396-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 16:29:34.446014  104127 status.go:255] checking status of ha-912396-m04 ...
	I0814 16:29:34.446363  104127 cli_runner.go:164] Run: docker container inspect ha-912396-m04 --format={{.State.Status}}
	I0814 16:29:34.463327  104127 status.go:330] ha-912396-m04 host status = "Stopped" (err=<nil>)
	I0814 16:29:34.463351  104127 status.go:343] host is not running, skipping remaining checks
	I0814 16:29:34.463357  104127 status.go:257] ha-912396-m04 status: &{Name:ha-912396-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.51s)

TestMultiControlPlane/serial/RestartCluster (115.46s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-912396 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0814 16:31:05.704137   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/functional-712264/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-912396 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m54.650062538s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (115.46s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.46s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.46s)

TestMultiControlPlane/serial/AddSecondaryNode (42.43s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-912396 --control-plane -v=7 --alsologtostderr
E0814 16:31:33.405199   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/functional-712264/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-912396 --control-plane -v=7 --alsologtostderr: (41.606170699s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-912396 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (42.43s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.63s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.63s)

TestJSONOutput/start/Command (44.64s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-529740 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0814 16:32:20.771371   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-529740 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (44.637533498s)
--- PASS: TestJSONOutput/start/Command (44.64s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.67s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-529740 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-529740 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.78s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-529740 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-529740 --output=json --user=testUser: (5.777246313s)
--- PASS: TestJSONOutput/stop/Command (5.78s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-451279 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-451279 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (64.58891ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6373458b-4569-4e8f-9f75-ce37ee96850c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-451279] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"79ed2fd0-be32-4e84-9991-8e18552e37e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19446"}}
	{"specversion":"1.0","id":"98e2b00d-1efe-4385-897a-328928ec1565","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"90f2cd1b-911b-4e92-baa8-cfd32459683d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19446-13813/kubeconfig"}}
	{"specversion":"1.0","id":"b7847c7b-0e72-40d3-8ada-083f5e430031","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13813/.minikube"}}
	{"specversion":"1.0","id":"21f0ebec-9884-4094-af08-09a9801311e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"680047bd-42f7-46e9-927a-36a90e1a4ce2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f3bdfee3-d717-456a-bb1e-a7f0d3d9e8f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-451279" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-451279
--- PASS: TestErrorJSONOutput (0.20s)
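
The last event in the stdout above is the machine-readable failure record. The sketch below pulls out its fields, assuming only the names that appear verbatim in that event; it mirrors, but is not, the assertion logic of json_output_test.go.

package main

import (
	"encoding/json"
	"fmt"
)

// cloudEvent and errorData name only fields present in the
// io.k8s.sigs.minikube.error event shown above.
type cloudEvent struct {
	Type string          `json:"type"`
	Data json.RawMessage `json:"data"`
}

type errorData struct {
	Advice   string `json:"advice"`
	ExitCode string `json:"exitcode"`
	Message  string `json:"message"`
	Name     string `json:"name"`
	URL      string `json:"url"`
}

func main() {
	// The error event from the failed start above, verbatim.
	line := `{"specversion":"1.0","id":"f3bdfee3-d717-456a-bb1e-a7f0d3d9e8f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev cloudEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	if ev.Type != "io.k8s.sigs.minikube.error" {
		return
	}
	var d errorData
	if err := json.Unmarshal(ev.Data, &d); err != nil {
		panic(err)
	}
	// Prints: DRV_UNSUPPORTED_OS (exit 56): The driver 'fail' is not supported on linux/amd64
	fmt.Printf("%s (exit %s): %s\n", d.Name, d.ExitCode, d.Message)
}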

                                                
                                    
TestKicCustomNetwork/create_custom_network (34.31s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-904217 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-904217 --network=: (32.338228842s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-904217" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-904217
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-904217: (1.956045804s)
--- PASS: TestKicCustomNetwork/create_custom_network (34.31s)

TestKicCustomNetwork/use_default_bridge_network (26.61s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-032113 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-032113 --network=bridge: (24.704639268s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-032113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-032113
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-032113: (1.887244113s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.61s)

TestKicExistingNetwork (25.6s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-412500 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-412500 --network=existing-network: (23.558922361s)
helpers_test.go:175: Cleaning up "existing-network-412500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-412500
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-412500: (1.901728421s)
--- PASS: TestKicExistingNetwork (25.60s)

TestKicCustomSubnet (23.15s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-670376 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-670376 --subnet=192.168.60.0/24: (21.18112034s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-670376 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-670376" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-670376
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-670376: (1.947557804s)
--- PASS: TestKicCustomSubnet (23.15s)
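
The subnet assertion above hinges on the Go template passed to docker network inspect at kic_custom_network_test.go:161. A hedged sketch of the same check through os/exec, reusing the profile name and subnet from this run purely for illustration:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template the test runs against the cluster's docker network.
	out, err := exec.Command("docker", "network", "inspect",
		"custom-subnet-670376", "--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		panic(err)
	}
	if got := strings.TrimSpace(string(out)); got != "192.168.60.0/24" {
		fmt.Printf("unexpected subnet: %q\n", got)
	} else {
		fmt.Println("subnet matches:", got)
	}
}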

                                                
                                    
TestKicStaticIP (22.61s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-153614 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-153614 --static-ip=192.168.200.200: (20.523235477s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-153614 ip
helpers_test.go:175: Cleaning up "static-ip-153614" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-153614
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-153614: (1.965453464s)
--- PASS: TestKicStaticIP (22.61s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (52.89s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-539184 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-539184 --driver=docker  --container-runtime=crio: (24.309213341s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-542266 --driver=docker  --container-runtime=crio
E0814 16:36:05.703897   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/functional-712264/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-542266 --driver=docker  --container-runtime=crio: (23.558952644s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-539184
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-542266
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-542266" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-542266
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-542266: (1.828049434s)
helpers_test.go:175: Cleaning up "first-539184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-539184
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-539184: (2.148915804s)
--- PASS: TestMinikubeProfile (52.89s)

TestMountStart/serial/StartWithMountFirst (5.88s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-078645 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-078645 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.884250912s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.88s)

TestMountStart/serial/VerifyMountFirst (0.24s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-078645 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

TestMountStart/serial/StartWithMountSecond (9.17s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-095468 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-095468 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.172717739s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.17s)

TestMountStart/serial/VerifyMountSecond (0.24s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-095468 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

TestMountStart/serial/DeleteFirst (1.6s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-078645 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-078645 --alsologtostderr -v=5: (1.600093822s)
--- PASS: TestMountStart/serial/DeleteFirst (1.60s)

TestMountStart/serial/VerifyMountPostDelete (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-095468 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

TestMountStart/serial/Stop (1.17s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-095468
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-095468: (1.168360718s)
--- PASS: TestMountStart/serial/Stop (1.17s)

TestMountStart/serial/RestartStopped (7.88s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-095468
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-095468: (6.881684186s)
--- PASS: TestMountStart/serial/RestartStopped (7.88s)

TestMountStart/serial/VerifyMountPostStop (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-095468 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

TestMultiNode/serial/FreshStart2Nodes (70.38s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-650888 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0814 16:37:20.770973   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-650888 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m9.935107689s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (70.38s)

TestMultiNode/serial/DeployApp2Nodes (5.09s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-650888 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-650888 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-650888 -- rollout status deployment/busybox: (3.791466312s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-650888 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-650888 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-650888 -- exec busybox-7dff88458-wbp5t -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-650888 -- exec busybox-7dff88458-zrjnw -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-650888 -- exec busybox-7dff88458-wbp5t -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-650888 -- exec busybox-7dff88458-zrjnw -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-650888 -- exec busybox-7dff88458-wbp5t -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-650888 -- exec busybox-7dff88458-zrjnw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.09s)

TestMultiNode/serial/PingHostFrom2Pods (0.69s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-650888 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-650888 -- exec busybox-7dff88458-wbp5t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-650888 -- exec busybox-7dff88458-wbp5t -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-650888 -- exec busybox-7dff88458-zrjnw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-650888 -- exec busybox-7dff88458-zrjnw -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.69s)
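
The shell pipeline above, nslookup ... | awk 'NR==5' | cut -d' ' -f3, takes the third space-separated field of the fifth line of busybox nslookup output, which is where the resolved address for host.minikube.internal lands before the test pings it. A Go rendering of that same text surgery; the sample output shape is an assumption about busybox's format, not something captured in this run:

package main

import (
	"fmt"
	"strings"
)

// hostIP mimics `awk 'NR==5' | cut -d' ' -f3`: field 3 of line 5.
// Like cut, strings.Split counts empty fields between adjacent delimiters.
func hostIP(nslookupOutput string) (string, bool) {
	lines := strings.Split(nslookupOutput, "\n")
	if len(lines) < 5 {
		return "", false
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return "", false
	}
	return fields[2], true
}

func main() {
	// Hypothetical busybox-style output; the gateway address matches the
	// one the test pings (192.168.67.1).
	sample := "Server:    10.96.0.10\nAddress 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName:      host.minikube.internal\nAddress 1: 192.168.67.1\n"
	if ip, ok := hostIP(sample); ok {
		fmt.Println(ip) // 192.168.67.1
	}
}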

                                                
                                    
TestMultiNode/serial/AddNode (29.85s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-650888 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-650888 -v 3 --alsologtostderr: (29.262810691s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (29.85s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-650888 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.29s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.29s)

TestMultiNode/serial/CopyFile (8.82s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 cp testdata/cp-test.txt multinode-650888:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 ssh -n multinode-650888 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 cp multinode-650888:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile512626337/001/cp-test_multinode-650888.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 ssh -n multinode-650888 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 cp multinode-650888:/home/docker/cp-test.txt multinode-650888-m02:/home/docker/cp-test_multinode-650888_multinode-650888-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 ssh -n multinode-650888 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 ssh -n multinode-650888-m02 "sudo cat /home/docker/cp-test_multinode-650888_multinode-650888-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 cp multinode-650888:/home/docker/cp-test.txt multinode-650888-m03:/home/docker/cp-test_multinode-650888_multinode-650888-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 ssh -n multinode-650888 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 ssh -n multinode-650888-m03 "sudo cat /home/docker/cp-test_multinode-650888_multinode-650888-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 cp testdata/cp-test.txt multinode-650888-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 ssh -n multinode-650888-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 cp multinode-650888-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile512626337/001/cp-test_multinode-650888-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 ssh -n multinode-650888-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 cp multinode-650888-m02:/home/docker/cp-test.txt multinode-650888:/home/docker/cp-test_multinode-650888-m02_multinode-650888.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 ssh -n multinode-650888-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 ssh -n multinode-650888 "sudo cat /home/docker/cp-test_multinode-650888-m02_multinode-650888.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 cp multinode-650888-m02:/home/docker/cp-test.txt multinode-650888-m03:/home/docker/cp-test_multinode-650888-m02_multinode-650888-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 ssh -n multinode-650888-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 ssh -n multinode-650888-m03 "sudo cat /home/docker/cp-test_multinode-650888-m02_multinode-650888-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 cp testdata/cp-test.txt multinode-650888-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 ssh -n multinode-650888-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 cp multinode-650888-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile512626337/001/cp-test_multinode-650888-m03.txt
E0814 16:38:43.835132   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 ssh -n multinode-650888-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 cp multinode-650888-m03:/home/docker/cp-test.txt multinode-650888:/home/docker/cp-test_multinode-650888-m03_multinode-650888.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 ssh -n multinode-650888-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 ssh -n multinode-650888 "sudo cat /home/docker/cp-test_multinode-650888-m03_multinode-650888.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 cp multinode-650888-m03:/home/docker/cp-test.txt multinode-650888-m02:/home/docker/cp-test_multinode-650888-m03_multinode-650888-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 ssh -n multinode-650888-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 ssh -n multinode-650888-m02 "sudo cat /home/docker/cp-test_multinode-650888-m03_multinode-650888-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.82s)
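
Every cp above is verified by catting the file back over ssh and comparing it with the source (the helpers_test.go:534/556 pairs). A compact sketch of that round trip; copyAndVerify is a hypothetical helper, and the bare minikube binary name stands in for the out/minikube-linux-amd64 path used here:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// copyAndVerify mirrors the pattern above: `minikube cp` the file onto a
// node, then `minikube ssh -n <node>` with sudo cat to read it back and compare.
func copyAndVerify(profile, node, src, dst string) error {
	if err := exec.Command("minikube", "-p", profile, "cp",
		src, node+":"+dst).Run(); err != nil {
		return fmt.Errorf("cp: %w", err)
	}
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node,
		"sudo cat "+dst).Output()
	if err != nil {
		return fmt.Errorf("ssh cat: %w", err)
	}
	want, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	if !bytes.Equal(got, want) {
		return fmt.Errorf("round-trip mismatch for %s", dst)
	}
	return nil
}

func main() {
	fmt.Println(copyAndVerify("multinode-650888", "multinode-650888-m02",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt"))
}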

                                                
                                    
TestMultiNode/serial/StopNode (2.07s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-650888 node stop m03: (1.167182651s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-650888 status: exit status 7 (447.100738ms)

                                                
                                                
-- stdout --
	multinode-650888
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-650888-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-650888-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-650888 status --alsologtostderr: exit status 7 (454.513725ms)

                                                
                                                
-- stdout --
	multinode-650888
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-650888-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-650888-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 16:38:47.660938  170012 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:38:47.661408  170012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:38:47.661418  170012 out.go:304] Setting ErrFile to fd 2...
	I0814 16:38:47.661426  170012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:38:47.661624  170012 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13813/.minikube/bin
	I0814 16:38:47.661838  170012 out.go:298] Setting JSON to false
	I0814 16:38:47.661871  170012 mustload.go:65] Loading cluster: multinode-650888
	I0814 16:38:47.661900  170012 notify.go:220] Checking for updates...
	I0814 16:38:47.662293  170012 config.go:182] Loaded profile config "multinode-650888": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:38:47.662311  170012 status.go:255] checking status of multinode-650888 ...
	I0814 16:38:47.662689  170012 cli_runner.go:164] Run: docker container inspect multinode-650888 --format={{.State.Status}}
	I0814 16:38:47.680236  170012 status.go:330] multinode-650888 host status = "Running" (err=<nil>)
	I0814 16:38:47.680267  170012 host.go:66] Checking if "multinode-650888" exists ...
	I0814 16:38:47.680621  170012 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-650888
	I0814 16:38:47.697889  170012 host.go:66] Checking if "multinode-650888" exists ...
	I0814 16:38:47.698138  170012 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:38:47.698190  170012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-650888
	I0814 16:38:47.714938  170012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/multinode-650888/id_rsa Username:docker}
	I0814 16:38:47.801950  170012 ssh_runner.go:195] Run: systemctl --version
	I0814 16:38:47.805956  170012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:38:47.816172  170012 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0814 16:38:47.869454  170012 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-08-14 16:38:47.860007149 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0814 16:38:47.869971  170012 kubeconfig.go:125] found "multinode-650888" server: "https://192.168.67.2:8443"
	I0814 16:38:47.869995  170012 api_server.go:166] Checking apiserver status ...
	I0814 16:38:47.870036  170012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 16:38:47.880384  170012 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1502/cgroup
	I0814 16:38:47.888988  170012 api_server.go:182] apiserver freezer: "9:freezer:/docker/c5081ec567e8f89739e9796fec99525e1cf3e0d02a9e81b242caa28c432c7887/crio/crio-b7ddf503572eb9c5ba0306d5f2e32dea5cfb6841a56aa780d0c1b1851248fcf5"
	I0814 16:38:47.889107  170012 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c5081ec567e8f89739e9796fec99525e1cf3e0d02a9e81b242caa28c432c7887/crio/crio-b7ddf503572eb9c5ba0306d5f2e32dea5cfb6841a56aa780d0c1b1851248fcf5/freezer.state
	I0814 16:38:47.896552  170012 api_server.go:204] freezer state: "THAWED"
	I0814 16:38:47.896582  170012 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0814 16:38:47.900117  170012 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0814 16:38:47.900138  170012 status.go:422] multinode-650888 apiserver status = Running (err=<nil>)
	I0814 16:38:47.900148  170012 status.go:257] multinode-650888 status: &{Name:multinode-650888 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 16:38:47.900174  170012 status.go:255] checking status of multinode-650888-m02 ...
	I0814 16:38:47.900401  170012 cli_runner.go:164] Run: docker container inspect multinode-650888-m02 --format={{.State.Status}}
	I0814 16:38:47.917315  170012 status.go:330] multinode-650888-m02 host status = "Running" (err=<nil>)
	I0814 16:38:47.917341  170012 host.go:66] Checking if "multinode-650888-m02" exists ...
	I0814 16:38:47.917583  170012 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-650888-m02
	I0814 16:38:47.935430  170012 host.go:66] Checking if "multinode-650888-m02" exists ...
	I0814 16:38:47.935662  170012 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:38:47.935699  170012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-650888-m02
	I0814 16:38:47.953650  170012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19446-13813/.minikube/machines/multinode-650888-m02/id_rsa Username:docker}
	I0814 16:38:48.045730  170012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:38:48.055990  170012 status.go:257] multinode-650888-m02 status: &{Name:multinode-650888-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0814 16:38:48.056025  170012 status.go:255] checking status of multinode-650888-m03 ...
	I0814 16:38:48.056286  170012 cli_runner.go:164] Run: docker container inspect multinode-650888-m03 --format={{.State.Status}}
	I0814 16:38:48.072401  170012 status.go:330] multinode-650888-m03 host status = "Stopped" (err=<nil>)
	I0814 16:38:48.072422  170012 status.go:343] host is not running, skipping remaining checks
	I0814 16:38:48.072430  170012 status.go:257] multinode-650888-m03 status: &{Name:multinode-650888-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.07s)
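
The stderr above spells out the apiserver liveness probe: pgrep the newest kube-apiserver process, read its /proc/<pid>/cgroup entry to locate the freezer cgroup, confirm freezer.state reads THAWED, then hit /healthz. A rough sketch of just the freezer step, assuming cgroup v1 as in this log; pid 1502 is the value from this run and purely illustrative:

package main

import (
	"fmt"
	"os"
	"strings"
)

// freezerState reads /proc/<pid>/cgroup, finds the freezer controller line
// (e.g. "9:freezer:/docker/<id>/crio/crio-<id>"), and returns the contents
// of that cgroup's freezer.state file ("THAWED" when runnable).
func freezerState(pid int) (string, error) {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return "", err
	}
	for _, line := range strings.Split(string(data), "\n") {
		parts := strings.SplitN(line, ":", 3)
		if len(parts) == 3 && parts[1] == "freezer" {
			state, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
			if err != nil {
				return "", err
			}
			return strings.TrimSpace(string(state)), nil
		}
	}
	return "", fmt.Errorf("no freezer cgroup for pid %d", pid)
}

func main() {
	state, err := freezerState(1502)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("freezer state:", state)
}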

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.01s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-650888 node start m03 -v=7 --alsologtostderr: (8.352317105s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.01s)

TestMultiNode/serial/RestartKeepsNodes (77.75s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-650888
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-650888
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-650888: (24.687987391s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-650888 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-650888 --wait=true -v=8 --alsologtostderr: (52.974396678s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-650888
--- PASS: TestMultiNode/serial/RestartKeepsNodes (77.75s)

TestMultiNode/serial/DeleteNode (4.94s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-650888 node delete m03: (4.378656823s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.94s)

TestMultiNode/serial/StopMultiNode (23.68s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-650888 stop: (23.515450125s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-650888 status: exit status 7 (80.807391ms)

                                                
                                                
-- stdout --
	multinode-650888
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-650888-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-650888 status --alsologtostderr: exit status 7 (78.717395ms)

                                                
                                                
-- stdout --
	multinode-650888
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-650888-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 16:40:43.414439  179275 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:40:43.414682  179275 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:40:43.414692  179275 out.go:304] Setting ErrFile to fd 2...
	I0814 16:40:43.414698  179275 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:40:43.414920  179275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13813/.minikube/bin
	I0814 16:40:43.415096  179275 out.go:298] Setting JSON to false
	I0814 16:40:43.415127  179275 mustload.go:65] Loading cluster: multinode-650888
	I0814 16:40:43.415222  179275 notify.go:220] Checking for updates...
	I0814 16:40:43.415516  179275 config.go:182] Loaded profile config "multinode-650888": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:40:43.415533  179275 status.go:255] checking status of multinode-650888 ...
	I0814 16:40:43.415926  179275 cli_runner.go:164] Run: docker container inspect multinode-650888 --format={{.State.Status}}
	I0814 16:40:43.433448  179275 status.go:330] multinode-650888 host status = "Stopped" (err=<nil>)
	I0814 16:40:43.433478  179275 status.go:343] host is not running, skipping remaining checks
	I0814 16:40:43.433487  179275 status.go:257] multinode-650888 status: &{Name:multinode-650888 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 16:40:43.433513  179275 status.go:255] checking status of multinode-650888-m02 ...
	I0814 16:40:43.433861  179275 cli_runner.go:164] Run: docker container inspect multinode-650888-m02 --format={{.State.Status}}
	I0814 16:40:43.449899  179275 status.go:330] multinode-650888-m02 host status = "Stopped" (err=<nil>)
	I0814 16:40:43.449920  179275 status.go:343] host is not running, skipping remaining checks
	I0814 16:40:43.449926  179275 status.go:257] multinode-650888-m02 status: &{Name:multinode-650888-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.68s)

TestMultiNode/serial/RestartMultiNode (54.68s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-650888 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0814 16:41:05.703685   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/functional-712264/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-650888 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (54.122634294s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-650888 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.68s)

TestMultiNode/serial/ValidateNameConflict (25.58s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-650888
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-650888-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-650888-m02 --driver=docker  --container-runtime=crio: exit status 14 (68.666533ms)

                                                
                                                
-- stdout --
	* [multinode-650888-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19446
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19446-13813/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13813/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-650888-m02' is duplicated with machine name 'multinode-650888-m02' in profile 'multinode-650888'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-650888-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-650888-m03 --driver=docker  --container-runtime=crio: (23.38149388s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-650888
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-650888: exit status 80 (258.571621ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-650888 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-650888-m03 already exists in multinode-650888-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-650888-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-650888-m03: (1.825524395s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.58s)

TestPreload (116.94s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-366913 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0814 16:42:20.771376   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:42:28.767562   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/functional-712264/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-366913 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m19.389948957s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-366913 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-366913 image pull gcr.io/k8s-minikube/busybox: (2.549753402s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-366913
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-366913: (5.657129752s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-366913 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-366913 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (26.906319327s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-366913 image list
helpers_test.go:175: Cleaning up "test-preload-366913" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-366913
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-366913: (2.220256533s)
--- PASS: TestPreload (116.94s)

TestScheduledStopUnix (96.16s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-664020 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-664020 --memory=2048 --driver=docker  --container-runtime=crio: (19.52666738s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-664020 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-664020 -n scheduled-stop-664020
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-664020 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-664020 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-664020 -n scheduled-stop-664020
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-664020
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-664020 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-664020
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-664020: exit status 7 (64.429144ms)

-- stdout --
	scheduled-stop-664020
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-664020 -n scheduled-stop-664020
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-664020 -n scheduled-stop-664020: exit status 7 (59.012778ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-664020" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-664020
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-664020: (5.359139139s)
--- PASS: TestScheduledStopUnix (96.16s)
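
Condensed, the scheduled-stop sequence exercised above (profile name and wait are illustrative):

	minikube start -p sched-demo --memory=2048 --driver=docker --container-runtime=crio
	minikube stop -p sched-demo --schedule 5m       # arm a stop five minutes out
	minikube stop -p sched-demo --cancel-scheduled  # disarm it again
	minikube stop -p sched-demo --schedule 15s      # re-arm with a short delay
	sleep 20; minikube status -p sched-demo         # exits 7 once the host reports Stopped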

TestInsufficientStorage (12.49s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-090173 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-090173 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.17813005s)

-- stdout --
	{"specversion":"1.0","id":"dfffee6d-ca42-44e2-801a-ba3c296ab5db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-090173] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1f970377-04c5-4ac4-964f-2f496be22190","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19446"}}
	{"specversion":"1.0","id":"b044d7e0-1130-4a7e-ae9e-ebf09403856c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"be68b0d6-db43-4afc-92c8-3ada14ddc1cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19446-13813/kubeconfig"}}
	{"specversion":"1.0","id":"e8ff5711-7dd4-49be-a38e-13e7c9ff91f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13813/.minikube"}}
	{"specversion":"1.0","id":"286288b1-b3b4-43ac-aa37-c4668e962658","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"fc8f4c99-e81c-4fb8-a1a3-700e72bdb27e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2baedcf0-8022-4289-a07f-dc8a0a12f99b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"805f1706-5d8a-448e-a273-1ad4c89d9a3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"276b424d-f9ff-43d9-b729-3d93bb850a74","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"253a519f-2abb-4dc7-b3b1-7d8db0f099e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"af55e85f-aca5-4b3b-9da3-5746616e85e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-090173\" primary control-plane node in \"insufficient-storage-090173\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"9bb65686-44a4-4355-8204-5d980e915a78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1723567951-19429 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"32488d07-3cb7-4563-9a30-16d8c66f677f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"12b480f4-ecbe-4e52-830a-74e0a8270a29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-090173 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-090173 --output=json --layout=cluster: exit status 7 (260.3441ms)

-- stdout --
	{"Name":"insufficient-storage-090173","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-090173","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0814 16:45:51.058588  201823 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-090173" does not appear in /home/jenkins/minikube-integration/19446-13813/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-090173 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-090173 --output=json --layout=cluster: exit status 7 (251.940469ms)

-- stdout --
	{"Name":"insufficient-storage-090173","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-090173","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0814 16:45:51.311529  201921 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-090173" does not appear in /home/jenkins/minikube-integration/19446-13813/kubeconfig
	E0814 16:45:51.320818  201921 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/insufficient-storage-090173/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-090173" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-090173
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-090173: (1.803456939s)
--- PASS: TestInsufficientStorage (12.49s)
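
The RSRC_DOCKER_STORAGE advice embedded in the JSON error event above maps to a few concrete commands; the --force escape hatch is the one the message itself mentions (profile name is a placeholder):

	docker system prune                  # remove unused Docker data; add -a for a deeper clean
	minikube ssh -- docker system prune  # only applicable when the node uses the Docker runtime
	minikube start -p <profile> --force  # bypass the free-space check entirely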

TestRunningBinaryUpgrade (61.73s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
E0814 16:47:20.770487   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.626174018 start -p running-upgrade-303728 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.626174018 start -p running-upgrade-303728 --memory=2200 --vm-driver=docker  --container-runtime=crio: (31.999182056s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-303728 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-303728 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.277178328s)
helpers_test.go:175: Cleaning up "running-upgrade-303728" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-303728
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-303728: (5.869286704s)
--- PASS: TestRunningBinaryUpgrade (61.73s)

TestKubernetesUpgrade (346.81s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-517887 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-517887 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (46.753505259s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-517887
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-517887: (1.189172293s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-517887 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-517887 status --format={{.Host}}: exit status 7 (58.401022ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-517887 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-517887 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m27.316131373s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-517887 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-517887 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-517887 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (65.133765ms)

-- stdout --
	* [kubernetes-upgrade-517887] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19446
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19446-13813/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13813/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-517887
	    minikube start -p kubernetes-upgrade-517887 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5178872 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-517887 --kubernetes-version=v1.31.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-517887 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-517887 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.165435611s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-517887" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-517887
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-517887: (2.203254916s)
--- PASS: TestKubernetesUpgrade (346.81s)
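
Condensed, the semantics this test pins down: in-place upgrades succeed, downgrades are refused with exit status 106 and a suggestion to delete and recreate. A sketch with an illustrative profile name:

	minikube start -p kup-demo --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio
	minikube stop -p kup-demo
	minikube start -p kup-demo --memory=2200 --kubernetes-version=v1.31.0 --driver=docker --container-runtime=crio  # upgrade in place
	minikube start -p kup-demo --kubernetes-version=v1.20.0  # fails: K8S_DOWNGRADE_UNSUPPORTED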

TestMissingContainerUpgrade (175.17s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1575006549 start -p missing-upgrade-459404 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1575006549 start -p missing-upgrade-459404 --memory=2200 --driver=docker  --container-runtime=crio: (1m40.019865509s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-459404
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-459404: (15.676938931s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-459404
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-459404 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-459404 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (51.937213744s)
helpers_test.go:175: Cleaning up "missing-upgrade-459404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-459404
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-459404: (4.870950236s)
--- PASS: TestMissingContainerUpgrade (175.17s)
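
The scenario above removes the node container behind minikube's back and verifies that a newer binary can recreate it; the essence, with illustrative names (under the docker driver the container is named after the profile):

	docker stop missing-demo && docker rm missing-demo  # make the machine container disappear
	minikube start -p missing-demo --memory=2200 --driver=docker --container-runtime=crio  # recreates the node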

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-115339 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-115339 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (73.289567ms)

-- stdout --
	* [NoKubernetes-115339] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19446
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19446-13813/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13813/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
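
The MK_USAGE rejection above is the guard under test: --no-kubernetes and --kubernetes-version are mutually exclusive. If the version is coming from a persisted global config, clear it as the error suggests (profile name illustrative):

	minikube config unset kubernetes-version
	minikube start -p nok8s-demo --no-kubernetes --driver=docker --container-runtime=crio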

TestNoKubernetes/serial/StartWithK8s (32.13s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-115339 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-115339 --driver=docker  --container-runtime=crio: (31.746313753s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-115339 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (32.13s)

TestNetworkPlugins/group/false (7.81s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-228979 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-228979 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (209.546741ms)

-- stdout --
	* [false-228979] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19446
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19446-13813/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13813/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0814 16:45:56.948534  204236 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:45:56.948648  204236 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:45:56.948659  204236 out.go:304] Setting ErrFile to fd 2...
	I0814 16:45:56.948666  204236 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:45:56.948881  204236 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13813/.minikube/bin
	I0814 16:45:56.949718  204236 out.go:298] Setting JSON to false
	I0814 16:45:56.951073  204236 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5301,"bootTime":1723648656,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 16:45:56.951164  204236 start.go:139] virtualization: kvm guest
	I0814 16:45:56.953657  204236 out.go:177] * [false-228979] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 16:45:56.955381  204236 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 16:45:56.955380  204236 notify.go:220] Checking for updates...
	I0814 16:45:56.956898  204236 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 16:45:56.958214  204236 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19446-13813/kubeconfig
	I0814 16:45:56.959492  204236 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13813/.minikube
	I0814 16:45:56.960983  204236 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 16:45:56.962262  204236 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 16:45:56.964382  204236 config.go:182] Loaded profile config "NoKubernetes-115339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:45:56.964544  204236 config.go:182] Loaded profile config "force-systemd-env-125414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:45:56.964684  204236 config.go:182] Loaded profile config "offline-crio-113387": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:45:56.964788  204236 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 16:45:57.003452  204236 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0814 16:45:57.003641  204236 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0814 16:45:57.086808  204236 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:true NGoroutines:93 SystemTime:2024-08-14 16:45:57.07234998 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0814 16:45:57.086952  204236 docker.go:307] overlay module found
	I0814 16:45:57.089467  204236 out.go:177] * Using the docker driver based on user configuration
	I0814 16:45:57.091350  204236 start.go:297] selected driver: docker
	I0814 16:45:57.091370  204236 start.go:901] validating driver "docker" against <nil>
	I0814 16:45:57.091388  204236 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 16:45:57.094413  204236 out.go:177] 
	W0814 16:45:57.096184  204236 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0814 16:45:57.097633  204236 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-228979 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-228979

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-228979

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-228979

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-228979

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-228979

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-228979

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-228979

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-228979

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-228979

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-228979

>>> host: /etc/nsswitch.conf:
* Profile "false-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228979"

>>> host: /etc/hosts:
* Profile "false-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228979"

>>> host: /etc/resolv.conf:
* Profile "false-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228979"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-228979

>>> host: crictl pods:
* Profile "false-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228979"

>>> host: crictl containers:
* Profile "false-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228979"

>>> k8s: describe netcat deployment:
error: context "false-228979" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-228979" does not exist

>>> k8s: netcat logs:
error: context "false-228979" does not exist

>>> k8s: describe coredns deployment:
error: context "false-228979" does not exist

>>> k8s: describe coredns pods:
error: context "false-228979" does not exist

>>> k8s: coredns logs:
error: context "false-228979" does not exist

>>> k8s: describe api server pod(s):
error: context "false-228979" does not exist

>>> k8s: api server logs:
error: context "false-228979" does not exist

>>> host: /etc/cni:
* Profile "false-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228979"

>>> host: ip a s:
* Profile "false-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228979"

>>> host: ip r s:
* Profile "false-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228979"

>>> host: iptables-save:
* Profile "false-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228979"

>>> host: iptables table nat:
* Profile "false-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228979"

>>> k8s: describe kube-proxy daemon set:
error: context "false-228979" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-228979" does not exist

>>> k8s: kube-proxy logs:
error: context "false-228979" does not exist

>>> host: kubelet daemon status:
* Profile "false-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228979"

>>> host: kubelet daemon config:
* Profile "false-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228979"

>>> k8s: kubelet logs:
* Profile "false-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228979"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228979"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228979"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-228979

>>> host: docker daemon status:
* Profile "false-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228979"

>>> host: docker daemon config:
* Profile "false-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228979"

>>> host: /etc/docker/daemon.json:
* Profile "false-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228979"

>>> host: docker system info:
* Profile "false-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228979"

>>> host: cri-docker daemon status:
* Profile "false-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228979"

>>> host: cri-docker daemon config:
* Profile "false-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228979"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228979"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228979"

>>> host: cri-dockerd version:
* Profile "false-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228979"

>>> host: containerd daemon status:
* Profile "false-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228979"

>>> host: containerd daemon config:
* Profile "false-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228979"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228979"

>>> host: /etc/containerd/config.toml:
* Profile "false-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228979"

>>> host: containerd config dump:
* Profile "false-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228979"

>>> host: crio daemon status:
* Profile "false-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228979"

>>> host: crio daemon config:
* Profile "false-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228979"

>>> host: /etc/crio:
* Profile "false-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228979"

>>> host: crio config:
* Profile "false-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228979"

----------------------- debugLogs end: false-228979 [took: 7.369166509s] --------------------------------
helpers_test.go:175: Cleaning up "false-228979" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-228979
--- PASS: TestNetworkPlugins/group/false (7.81s)
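
This group passes because the start command is rejected up front: the crio runtime requires a CNI, so --cni=false exits with status 14 and the debug logs above are collected against a profile that was never created. Any concrete CNI selection clears the check; for example (bridge is one of the values minikube's --cni flag accepts, profile name illustrative):

	minikube start -p cni-demo --memory=2048 --cni=bridge --driver=docker --container-runtime=crio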

TestNoKubernetes/serial/StartWithStopK8s (26.42s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-115339 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-115339 --no-kubernetes --driver=docker  --container-runtime=crio: (18.99273113s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-115339 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-115339 status -o json: exit status 2 (366.67406ms)

-- stdout --
	{"Name":"NoKubernetes-115339","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-115339
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-115339: (7.058204088s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (26.42s)

TestStoppedBinaryUpgrade/Setup (2.21s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.21s)

TestNoKubernetes/serial/Start (9.03s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-115339 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-115339 --no-kubernetes --driver=docker  --container-runtime=crio: (9.029231686s)
--- PASS: TestNoKubernetes/serial/Start (9.03s)

TestStoppedBinaryUpgrade/Upgrade (86.25s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1274134744 start -p stopped-upgrade-909810 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1274134744 start -p stopped-upgrade-909810 --memory=2200 --vm-driver=docker  --container-runtime=crio: (57.378695668s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1274134744 -p stopped-upgrade-909810 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1274134744 -p stopped-upgrade-909810 stop: (2.393867134s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-909810 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-909810 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.481377705s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (86.25s)
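
The stopped-binary path differs from TestRunningBinaryUpgrade only in that the cluster is stopped before the newer binary takes over; condensed (the /tmp binary is the old release the test downloads, suffix and profile name illustrative):

	/tmp/minikube-v1.26.0.<suffix> start -p upg-demo --memory=2200 --vm-driver=docker --container-runtime=crio
	/tmp/minikube-v1.26.0.<suffix> -p upg-demo stop
	out/minikube-linux-amd64 start -p upg-demo --memory=2200 --driver=docker --container-runtime=crio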

TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-115339 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-115339 "sudo systemctl is-active --quiet service kubelet": exit status 1 (245.792624ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

TestNoKubernetes/serial/ProfileList (7.3s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (6.852165265s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (7.30s)

TestNoKubernetes/serial/Stop (1.19s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-115339
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-115339: (1.193045371s)
--- PASS: TestNoKubernetes/serial/Stop (1.19s)

TestNoKubernetes/serial/StartNoArgs (6.99s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-115339 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-115339 --driver=docker  --container-runtime=crio: (6.988592961s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.99s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-115339 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-115339 "sudo systemctl is-active --quiet service kubelet": exit status 1 (342.710429ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.91s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-909810
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.91s)

                                                
                                    
TestPause/serial/Start (46.28s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-217362 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-217362 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (46.278688445s)
--- PASS: TestPause/serial/Start (46.28s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (21.68s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-217362 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-217362 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.669813834s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (21.68s)

                                                
                                    
TestPause/serial/Pause (0.67s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-217362 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.67s)

                                                
                                    
TestPause/serial/VerifyStatus (0.29s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-217362 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-217362 --output=json --layout=cluster: exit status 2 (285.500233ms)

-- stdout --
	{"Name":"pause-217362","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-217362","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)
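
Note on the exit status above: `minikube status` encodes cluster state in its exit code, so the test accepts exit status 2 here and asserts on the JSON payload instead, whose HTTP-style codes mark the paused layout (418 Paused, 405 Stopped, 200 OK). An illustrative Go sketch for decoding that payload; the structs mirror the JSON keys in the log and are not minikube's own types:

package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	Components map[string]component `json:"Components"`
}

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []node `json:"Nodes"`
}

func main() {
	// Abbreviated form of the payload captured in the log above.
	raw := `{"Name":"pause-217362","StatusCode":418,"StatusName":"Paused",
	 "Nodes":[{"Name":"pause-217362","Components":{
	  "apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
	  "kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Println(st.StatusName, st.Nodes[0].Components["kubelet"].StatusName) // Paused Stopped
}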

                                                
                                    
TestPause/serial/Unpause (0.62s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-217362 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.62s)

                                                
                                    
TestPause/serial/PauseAgain (0.76s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-217362 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.76s)

                                                
                                    
TestPause/serial/DeletePaused (2.73s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-217362 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-217362 --alsologtostderr -v=5: (2.730824928s)
--- PASS: TestPause/serial/DeletePaused (2.73s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (15.34s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (15.285089833s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-217362
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-217362: exit status 1 (16.345387ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-217362: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (15.34s)
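
Note: the non-zero exit from `docker volume inspect` above is the assertion, not a failure. After `minikube delete` the profile's Docker volume must be gone, so the inspect is expected to fail with `no such volume`. A hedged Go sketch of that check (profile name taken from the log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// volumeGone reports whether the profile's Docker volume no longer exists,
// treating a non-zero exit from `docker volume inspect` as "deleted".
func volumeGone(profile string) (bool, error) {
	err := exec.Command("docker", "volume", "inspect", profile).Run()
	if err == nil {
		return false, nil // inspect succeeded: the volume still exists
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return true, nil // "no such volume" surfaces as a non-zero exit
	}
	return false, err // docker itself could not be invoked
}

func main() {
	gone, err := volumeGone("pause-217362")
	fmt.Println(gone, err)
}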

                                                
                                    
TestNetworkPlugins/group/auto/Start (43.18s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-228979 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-228979 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (43.178758399s)
--- PASS: TestNetworkPlugins/group/auto/Start (43.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (43.61s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-228979 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-228979 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (43.611811067s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (43.61s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-228979 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-228979 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ndg9s" [9e3c4201-4a24-43e2-ac7f-be274d5dde87] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ndg9s" [9e3c4201-4a24-43e2-ac7f-be274d5dde87] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004261244s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-pv88n" [bcecf5be-5f3a-460d-a773-753690487361] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004072138s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-228979 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-228979 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-228979 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)
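
Note on the DNS/Localhost/HairPin trio that repeats for each CNI below: DNS resolves `kubernetes.default` through cluster DNS, Localhost connects to the pod's own port on 127.0.0.1, and HairPin connects to the pod through its own Service name (`netcat`), which only succeeds when the CNI supports hairpin NAT. A minimal Go sketch of the hairpin probe, shelling out to the same kubectl command shown above (context name from the log):

package main

import (
	"fmt"
	"os/exec"
)

// hairpinOK runs `nc -z` inside the netcat deployment against its own
// Service name, so the connection leaves the pod and hairpins back
// through the Service VIP.
func hairpinOK(kubectlContext string) error {
	out, err := exec.Command("kubectl", "--context", kubectlContext,
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080").CombinedOutput()
	if err != nil {
		return fmt.Errorf("hairpin check failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	fmt.Println(hairpinOK("auto-228979"))
}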

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-228979 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.2s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-228979 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8cz7n" [0141b949-bc67-4469-bec4-86d63a32e472] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8cz7n" [0141b949-bc67-4469-bec4-86d63a32e472] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004736029s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-228979 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-228979 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-228979 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (56.06s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-228979 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0814 16:51:05.704132   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/functional-712264/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-228979 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (56.060308589s)
--- PASS: TestNetworkPlugins/group/calico/Start (56.06s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (52.44s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-228979 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-228979 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (52.435655361s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (52.44s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-c699v" [7e6a6ec7-3b92-4d3a-837f-fd78ce8ca528] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004366845s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-228979 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-228979 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xsjjd" [2e62bdf9-1878-4e5d-acde-39e903681910] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-xsjjd" [2e62bdf9-1878-4e5d-acde-39e903681910] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.007825975s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (63.53s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-228979 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-228979 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m3.534853721s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (63.53s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-228979 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-228979 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xz4pz" [3c9c825d-671d-4b7b-aaf8-cd5afaef3ba7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-xz4pz" [3c9c825d-671d-4b7b-aaf8-cd5afaef3ba7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.00401233s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.21s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-228979 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-228979 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-228979 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (55.1s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-228979 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-228979 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (55.095368731s)
--- PASS: TestNetworkPlugins/group/flannel/Start (55.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-228979 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-228979 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-228979 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (72.92s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-228979 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-228979 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m12.922051068s)
--- PASS: TestNetworkPlugins/group/bridge/Start (72.92s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (141.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-737159 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-737159 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m21.889686607s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (141.89s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-228979 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-228979 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fv8f9" [645b779e-da3c-4067-bbb6-53517fba3e44] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fv8f9" [645b779e-da3c-4067-bbb6-53517fba3e44] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003961518s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-bsbrp" [b97607ed-26c0-4bde-89db-cf47c7161b4d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004154174s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
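
Note on the "waiting ... for pods matching" lines: the helper polls the cluster until every pod matching the label selector is Running (and then healthy). A hedged client-go sketch of that wait loop, written directly against the Kubernetes API rather than minikube's helpers_test.go; namespace and selector are taken from the flannel check above:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls until at least one pod matches the selector and all
// matching pods are Running. The real helper also checks readiness
// conditions; this sketch stops at the pod phase.
func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		allRunning := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				allRunning = false
			}
		}
		if allRunning {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %q in namespace %q", selector, ns)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitForLabel(context.Background(), cs, "kube-flannel", "app=flannel", 10*time.Minute))
}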

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-228979 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-228979 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8qfvc" [6da7fd66-7051-4ba6-9423-80b8561ff6f6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8qfvc" [6da7fd66-7051-4ba6-9423-80b8561ff6f6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004136515s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-228979 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-228979 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-228979 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-228979 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-228979 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-228979 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (62.72s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-647363 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-647363 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (1m2.721653199s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (62.72s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-228979 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (49.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-934009 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-934009 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (49.821716803s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (49.82s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-228979 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-bnkkf" [e6215d59-54ac-4c54-ba58-552dddb6f417] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-bnkkf" [e6215d59-54ac-4c54-ba58-552dddb6f417] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004234209s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-228979 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-228979 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-228979 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)
E0814 16:56:54.309848   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/auto-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:56:56.826012   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/calico-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:56:56.832457   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/calico-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:56:56.843867   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/calico-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:56:56.865225   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/calico-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:56:56.906654   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/calico-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:56:56.988157   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/calico-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:56:57.149661   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/calico-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:56:57.471540   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/calico-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:56:58.113493   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/calico-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:56:59.395638   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/calico-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:57:01.957554   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/calico-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:57:02.256434   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/kindnet-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:57:07.079469   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/calico-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:57:08.217956   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/custom-flannel-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:57:08.224371   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/custom-flannel-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:57:08.235754   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/custom-flannel-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:57:08.257133   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/custom-flannel-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:57:08.298510   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/custom-flannel-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:57:08.379912   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/custom-flannel-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:57:08.541452   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/custom-flannel-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:57:08.863185   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/custom-flannel-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:57:09.505169   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/custom-flannel-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:57:10.787227   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/custom-flannel-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:57:13.348723   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/custom-flannel-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:57:17.321244   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/calico-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:57:18.470071   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/custom-flannel-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:57:20.771028   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:57:28.711902   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/custom-flannel-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:57:37.803255   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/calico-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:57:49.194134   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/custom-flannel-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:09.478053   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/enable-default-cni-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:09.484431   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/enable-default-cni-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:09.495774   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/enable-default-cni-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:09.517123   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/enable-default-cni-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:09.558710   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/enable-default-cni-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:09.640344   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/enable-default-cni-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:09.801886   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/enable-default-cni-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:10.123558   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/enable-default-cni-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:10.765724   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/enable-default-cni-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:11.450753   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/flannel-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:11.457148   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/flannel-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:11.468573   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/flannel-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:11.490035   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/flannel-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:11.531457   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/flannel-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:11.612867   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/flannel-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:11.774301   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/flannel-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:12.047907   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/enable-default-cni-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:12.096334   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/flannel-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:12.737591   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/flannel-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:14.019740   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/flannel-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:14.610276   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/enable-default-cni-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:16.231693   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/auto-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:16.581187   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/flannel-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:18.765173   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/calico-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:19.731898   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/enable-default-cni-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:21.703202   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/flannel-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:24.178120   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/kindnet-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:29.974283   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/enable-default-cni-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:30.155971   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/custom-flannel-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:31.944719   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/flannel-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:47.602462   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/bridge-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:47.608904   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/bridge-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:47.620372   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/bridge-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:47.641849   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/bridge-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:47.683244   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/bridge-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:47.764648   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/bridge-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:47.926206   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/bridge-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:48.247714   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/bridge-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:48.889415   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/bridge-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:50.171735   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/bridge-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:50.456584   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/enable-default-cni-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:52.426650   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/flannel-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:52.733206   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/bridge-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:58:57.854945   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/bridge-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:59:08.097078   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/bridge-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:59:08.769721   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/functional-712264/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:59:28.578481   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/bridge-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:59:31.417871   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/enable-default-cni-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:59:33.388405   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/flannel-228979/client.crt: no such file or directory" logger="UnhandledError"
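
The cert_rotation errors above (and the similar bursts later in this log) are client-go's certificate-reload loop repeatedly re-opening client.crt files for profiles such as bridge-228979 and flannel-228979 whose directories earlier tests had already cleaned up; the tests that follow still pass. A minimal sketch of the underlying failure, assuming the workspace path shown in the log:

	test -f /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/bridge-228979/client.crt \
		|| echo "open client.crt: no such file or directory"   # same error the reload loop keeps logging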

TestStartStop/group/newest-cni/serial/FirstStart (29.3s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-166953 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-166953 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (29.298171498s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (29.30s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-934009 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [986a8a1e-720a-4c31-a042-57e0906d2c04] Pending
helpers_test.go:344: "busybox" [986a8a1e-720a-4c31-a042-57e0906d2c04] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [986a8a1e-720a-4c31-a042-57e0906d2c04] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004308513s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-934009 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.25s)
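
For reference, the DeployApp flow above creates a pod from testdata/busybox.yaml, waits up to 8m for a Running pod with the integration-test=busybox label, then execs ulimit -n in it. A rough imperative equivalent (a sketch only, not the test's actual manifest; the image name is taken from the image lists later in this report):

	kubectl --context default-k8s-diff-port-934009 run busybox \
		--image=gcr.io/k8s-minikube/busybox:1.28.4-glibc --labels=integration-test=busybox -- sleep 3600
	kubectl --context default-k8s-diff-port-934009 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
	kubectl --context default-k8s-diff-port-934009 exec busybox -- /bin/sh -c "ulimit -n"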

TestStartStop/group/no-preload/serial/DeployApp (10.24s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-647363 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3d2be34a-1a90-425c-9828-de0fbfef64d8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3d2be34a-1a90-425c-9828-de0fbfef64d8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003155518s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-647363 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.24s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-934009 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-934009 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.89s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.85s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-166953 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.85s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-934009 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-934009 --alsologtostderr -v=3: (11.890741952s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.89s)

TestStartStop/group/newest-cni/serial/Stop (1.19s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-166953 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-166953 --alsologtostderr -v=3: (1.186115948s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.19s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-166953 -n newest-cni-166953
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-166953 -n newest-cni-166953: exit status 7 (62.951707ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-166953 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)
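
A note on the "exit status 7 (may be ok)" pattern above: minikube status exits non-zero whenever the profile is not fully running, so for a deliberately stopped cluster the harness checks only the printed Host field and tolerates the exit code. A sketch of the same tolerance in a script (profile name from this run; the exact exit-code semantics are an assumption here, not something this report states):

	out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-166953 || true   # prints "Stopped", exits 7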

TestStartStop/group/newest-cni/serial/SecondStart (12.65s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-166953 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-166953 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (12.312717303s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-166953 -n newest-cni-166953
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.65s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.89s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-647363 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-647363 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.89s)

TestStartStop/group/no-preload/serial/Stop (11.96s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-647363 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-647363 --alsologtostderr -v=3: (11.962906915s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.96s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-934009 -n default-k8s-diff-port-934009
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-934009 -n default-k8s-diff-port-934009: exit status 7 (75.673658ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-934009 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (278.83s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-934009 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-934009 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m38.522690474s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-934009 -n default-k8s-diff-port-934009
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (278.83s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-166953 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/newest-cni/serial/Pause (2.76s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-166953 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-166953 -n newest-cni-166953
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-166953 -n newest-cni-166953: exit status 2 (277.814378ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-166953 -n newest-cni-166953
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-166953 -n newest-cni-166953: exit status 2 (284.434874ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-166953 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-166953 -n newest-cni-166953
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-166953 -n newest-cni-166953
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.76s)
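
The Pause sequence above is: pause, confirm the apiserver reports Paused and the kubelet Stopped (each status call exiting 2, which the test tolerates), then unpause and re-check. Condensed as a sketch, assuming the same profile:

	out/minikube-linux-amd64 pause -p newest-cni-166953
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-166953 || true   # Paused, exit 2
	out/minikube-linux-amd64 unpause -p newest-cni-166953
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-166953           # expected back to running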

TestStartStop/group/old-k8s-version/serial/DeployApp (10.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-737159 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dca1cf37-1ffd-446d-8b44-8fae4bf94c3b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [dca1cf37-1ffd-446d-8b44-8fae4bf94c3b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004351147s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-737159 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.41s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-647363 -n no-preload-647363
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-647363 -n no-preload-647363: exit status 7 (77.856071ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-647363 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (301s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-647363 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-647363 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (5m0.704088654s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-647363 -n no-preload-647363
E0814 17:00:06.147662   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/old-k8s-version-737159/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (301.00s)

TestStartStop/group/embed-certs/serial/FirstStart (48.81s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-472566 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-472566 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (48.809209594s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (48.81s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-737159 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-737159 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.89s)

TestStartStop/group/old-k8s-version/serial/Stop (13.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-737159 --alsologtostderr -v=3
E0814 16:55:23.837015   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/addons-146898/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-737159 --alsologtostderr -v=3: (13.421414305s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.42s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-737159 -n old-k8s-version-737159
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-737159 -n old-k8s-version-737159: exit status 7 (61.660712ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-737159 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/old-k8s-version/serial/SecondStart (28.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-737159 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0814 16:55:32.369994   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/auto-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:55:32.376316   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/auto-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:55:32.387624   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/auto-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:55:32.409526   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/auto-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:55:32.451689   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/auto-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:55:32.533149   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/auto-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:55:32.695402   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/auto-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:55:33.017243   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/auto-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:55:33.659105   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/auto-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:55:34.941226   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/auto-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:55:37.503082   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/auto-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:55:40.316941   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/kindnet-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:55:40.323358   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/kindnet-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:55:40.334960   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/kindnet-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:55:40.356866   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/kindnet-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:55:40.398635   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/kindnet-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:55:40.480689   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/kindnet-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:55:40.642945   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/kindnet-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:55:40.964446   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/kindnet-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:55:41.606041   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/kindnet-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:55:42.624924   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/auto-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:55:42.887936   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/kindnet-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:55:45.449517   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/kindnet-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:55:50.570883   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/kindnet-228979/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:55:52.866448   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/auto-228979/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-737159 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (27.927141577s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-737159 -n old-k8s-version-737159
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (28.26s)

TestStartStop/group/embed-certs/serial/DeployApp (10.25s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-472566 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [acfb599a-88c6-4b8f-a984-ee42bc457b85] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [acfb599a-88c6-4b8f-a984-ee42bc457b85] Running
E0814 16:56:00.812661   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/kindnet-228979/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004163422s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-472566 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.25s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-bd2rl" [be8db26a-25d3-45b3-b646-7730c6ce5bf9] Pending
helpers_test.go:344: "kubernetes-dashboard-cd95d586-bd2rl" [be8db26a-25d3-45b3-b646-7730c6ce5bf9] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0814 16:56:13.348106   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/auto-228979/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "kubernetes-dashboard-cd95d586-bd2rl" [be8db26a-25d3-45b3-b646-7730c6ce5bf9] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 27.003080514s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (27.00s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-472566 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0814 16:56:05.703835   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/functional-712264/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-472566 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/embed-certs/serial/Stop (12.99s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-472566 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-472566 --alsologtostderr -v=3: (12.993158249s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.99s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-472566 -n embed-certs-472566
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-472566 -n embed-certs-472566: exit status 7 (66.831159ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-472566 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/embed-certs/serial/SecondStart (262.38s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-472566 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0814 16:56:21.294172   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/kindnet-228979/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-472566 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m22.081704874s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-472566 -n embed-certs-472566
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (262.38s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-bd2rl" [be8db26a-25d3-45b3-b646-7730c6ce5bf9] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003770077s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-737159 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-737159 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/old-k8s-version/serial/Pause (2.81s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-737159 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-737159 -n old-k8s-version-737159
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-737159 -n old-k8s-version-737159: exit status 2 (330.412141ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-737159 -n old-k8s-version-737159
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-737159 -n old-k8s-version-737159: exit status 2 (334.022973ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-737159 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-737159 -n old-k8s-version-737159
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-737159 -n old-k8s-version-737159
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.81s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-9zq59" [fed635ef-8e99-4e4c-b31b-2d6490efe5cb] Running
E0814 16:59:40.686807   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/calico-228979/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004229412s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-9zq59" [fed635ef-8e99-4e4c-b31b-2d6490efe5cb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003593157s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-934009 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-934009 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.61s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-934009 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-934009 -n default-k8s-diff-port-934009
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-934009 -n default-k8s-diff-port-934009: exit status 2 (277.445146ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-934009 -n default-k8s-diff-port-934009
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-934009 -n default-k8s-diff-port-934009: exit status 2 (290.189002ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-934009 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-934009 -n default-k8s-diff-port-934009
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-934009 -n default-k8s-diff-port-934009
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.61s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-c242k" [113b3f26-f99a-45d6-9e4f-7cd8d6974e74] Running
E0814 17:00:08.709446   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/old-k8s-version-737159/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:00:09.540198   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/bridge-228979/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004244251s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-c242k" [113b3f26-f99a-45d6-9e4f-7cd8d6974e74] Running
E0814 17:00:13.831081   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/old-k8s-version-737159/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003780418s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-647363 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-647363 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/no-preload/serial/Pause (2.59s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-647363 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-647363 -n no-preload-647363
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-647363 -n no-preload-647363: exit status 2 (277.033977ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-647363 -n no-preload-647363
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-647363 -n no-preload-647363: exit status 2 (275.89707ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-647363 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-647363 -n no-preload-647363
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-647363 -n no-preload-647363
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.59s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-h6mvs" [3e5ffbe4-def9-4022-89c9-b97104eabe78] Running
E0814 17:00:44.555346   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/old-k8s-version-737159/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003097309s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-h6mvs" [3e5ffbe4-def9-4022-89c9-b97104eabe78] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003658528s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-472566 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-472566 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/embed-certs/serial/Pause (2.62s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-472566 --alsologtostderr -v=1
E0814 17:00:53.339782   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/enable-default-cni-228979/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-472566 -n embed-certs-472566
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-472566 -n embed-certs-472566: exit status 2 (293.958897ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-472566 -n embed-certs-472566
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-472566 -n embed-certs-472566: exit status 2 (283.814505ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-472566 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-472566 -n embed-certs-472566
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-472566 -n embed-certs-472566
E0814 17:00:55.309983   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/flannel-228979/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.62s)

Test skip (25/328)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

x
+
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

x
+
TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

x
+
TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

x
+
TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

x
+
TestAddons/serial/Volcano (0s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

x
+
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

x
+
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

x
+
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

x
+
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

x
+
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

x
+
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

x
+
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

x
+
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

x
+
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

x
+
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

x
+
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

x
+
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

x
+
TestNetworkPlugins/group/kubenet (3.77s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-228979 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-228979

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-228979

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-228979

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-228979

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-228979

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-228979

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-228979

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-228979

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-228979

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-228979

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228979"

>>> host: /etc/hosts:
* Profile "kubenet-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228979"

>>> host: /etc/resolv.conf:
* Profile "kubenet-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228979"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-228979

>>> host: crictl pods:
* Profile "kubenet-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228979"

>>> host: crictl containers:
* Profile "kubenet-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228979"

>>> k8s: describe netcat deployment:
error: context "kubenet-228979" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-228979" does not exist

>>> k8s: netcat logs:
error: context "kubenet-228979" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-228979" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-228979" does not exist

>>> k8s: coredns logs:
error: context "kubenet-228979" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-228979" does not exist

>>> k8s: api server logs:
error: context "kubenet-228979" does not exist

>>> host: /etc/cni:
* Profile "kubenet-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228979"

>>> host: ip a s:
* Profile "kubenet-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228979"

>>> host: ip r s:
* Profile "kubenet-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228979"

>>> host: iptables-save:
* Profile "kubenet-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228979"

>>> host: iptables table nat:
* Profile "kubenet-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228979"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-228979" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-228979" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-228979" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228979"

>>> host: kubelet daemon config:
* Profile "kubenet-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228979"

>>> k8s: kubelet logs:
* Profile "kubenet-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228979"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228979"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228979"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-228979

>>> host: docker daemon status:
* Profile "kubenet-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228979"

>>> host: docker daemon config:
* Profile "kubenet-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228979"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228979"

>>> host: docker system info:
* Profile "kubenet-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228979"

>>> host: cri-docker daemon status:
* Profile "kubenet-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228979"

>>> host: cri-docker daemon config:
* Profile "kubenet-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228979"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228979"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228979"

>>> host: cri-dockerd version:
* Profile "kubenet-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228979"

>>> host: containerd daemon status:
* Profile "kubenet-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228979"

>>> host: containerd daemon config:
* Profile "kubenet-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228979"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228979"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228979"

>>> host: containerd config dump:
* Profile "kubenet-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228979"

>>> host: crio daemon status:
* Profile "kubenet-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228979"

>>> host: crio daemon config:
* Profile "kubenet-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228979"

>>> host: /etc/crio:
* Profile "kubenet-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228979"

>>> host: crio config:
* Profile "kubenet-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228979"

----------------------- debugLogs end: kubenet-228979 [took: 3.571651932s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-228979" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-228979
--- SKIP: TestNetworkPlugins/group/kubenet (3.77s)

x
+
TestNetworkPlugins/group/cilium (3.68s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
E0814 16:46:05.703222   20599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13813/.minikube/profiles/functional-712264/client.crt: no such file or directory" logger="UnhandledError"
panic.go:626: 
----------------------- debugLogs start: cilium-228979 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-228979

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-228979

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-228979

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-228979

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-228979

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-228979

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-228979

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-228979

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-228979

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-228979

>>> host: /etc/nsswitch.conf:
* Profile "cilium-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228979"

>>> host: /etc/hosts:
* Profile "cilium-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228979"

>>> host: /etc/resolv.conf:
* Profile "cilium-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228979"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-228979

>>> host: crictl pods:
* Profile "cilium-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228979"

>>> host: crictl containers:
* Profile "cilium-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228979"

>>> k8s: describe netcat deployment:
error: context "cilium-228979" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-228979" does not exist

>>> k8s: netcat logs:
error: context "cilium-228979" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-228979" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-228979" does not exist

>>> k8s: coredns logs:
error: context "cilium-228979" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-228979" does not exist

>>> k8s: api server logs:
error: context "cilium-228979" does not exist

>>> host: /etc/cni:
* Profile "cilium-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228979"

>>> host: ip a s:
* Profile "cilium-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228979"

>>> host: ip r s:
* Profile "cilium-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228979"

>>> host: iptables-save:
* Profile "cilium-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228979"

>>> host: iptables table nat:
* Profile "cilium-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228979"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-228979

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-228979

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-228979" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-228979" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-228979

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-228979

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-228979" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-228979" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-228979" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-228979" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-228979" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228979"

>>> host: kubelet daemon config:
* Profile "cilium-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228979"

>>> k8s: kubelet logs:
* Profile "cilium-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228979"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228979"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228979"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-228979

>>> host: docker daemon status:
* Profile "cilium-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228979"

>>> host: docker daemon config:
* Profile "cilium-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228979"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228979"

>>> host: docker system info:
* Profile "cilium-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228979"

>>> host: cri-docker daemon status:
* Profile "cilium-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228979"

>>> host: cri-docker daemon config:
* Profile "cilium-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228979"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228979"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228979"

>>> host: cri-dockerd version:
* Profile "cilium-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228979"

>>> host: containerd daemon status:
* Profile "cilium-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228979"

>>> host: containerd daemon config:
* Profile "cilium-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228979"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228979"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228979"

>>> host: containerd config dump:
* Profile "cilium-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228979"

>>> host: crio daemon status:
* Profile "cilium-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228979"

>>> host: crio daemon config:
* Profile "cilium-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228979"

>>> host: /etc/crio:
* Profile "cilium-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228979"

>>> host: crio config:
* Profile "cilium-228979" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228979"

----------------------- debugLogs end: cilium-228979 [took: 3.537320992s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-228979" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-228979
--- SKIP: TestNetworkPlugins/group/cilium (3.68s)

x
+
TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-221983" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-221983
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)