Test Report: Docker_Linux_crio 19479

913baf54a454bfbef3be1ea09a51779f85ec9369:2024-08-19:35854

Failed tests (2/328)

| Order | Failed test                       | Duration (s) |
|-------|-----------------------------------|--------------|
| 34    | TestAddons/parallel/Ingress       | 154.92       |
| 36    | TestAddons/parallel/MetricsServer | 318.62       |
TestAddons/parallel/Ingress (154.92s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-010148 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-010148 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-010148 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [78867595-4e60-4d6b-be17-ee5eb5f34fa0] Pending
helpers_test.go:344: "nginx" [78867595-4e60-4d6b-be17-ee5eb5f34fa0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [78867595-4e60-4d6b-be17-ee5eb5f34fa0] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.00350594s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-010148 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-010148 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.12775878s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
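
For reference, exit status 28 is curl's CURLE_OPERATION_TIMEDOUT: the request was issued inside the node, but no HTTP response came back through the ingress before curl gave up, so nginx was never reachable via the ingress controller on 127.0.0.1. A by-hand re-run of the failing probe plus basic ingress triage might look like the following sketch, reusing the binary, profile, namespace, and label selector from this log (the explicit --max-time bound is an addition here, not part of the original test command):

out/minikube-linux-amd64 -p addons-010148 ssh "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
kubectl --context addons-010148 get ingress -A
kubectl --context addons-010148 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=50

The controller logs should show whether any request for Host: nginx.example.com ever arrived.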
addons_test.go:288: (dbg) Run:  kubectl --context addons-010148 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-010148 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-010148 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-010148 addons disable ingress-dns --alsologtostderr -v=1: (1.09457774s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-010148 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-010148 addons disable ingress --alsologtostderr -v=1: (7.614371976s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-010148
helpers_test.go:235: (dbg) docker inspect addons-010148:

-- stdout --
	[
	    {
	        "Id": "0ade25f8970db790384c9e6218706172b64b7d67e0d579aa193d87af2e1658cd",
	        "Created": "2024-08-19T11:57:43.276181556Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 86015,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-19T11:57:43.400361499Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:197224e1b90979b98de246567852a03b60e3aa31dcd0de02a456282118daeb84",
	        "ResolvConfPath": "/var/lib/docker/containers/0ade25f8970db790384c9e6218706172b64b7d67e0d579aa193d87af2e1658cd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0ade25f8970db790384c9e6218706172b64b7d67e0d579aa193d87af2e1658cd/hostname",
	        "HostsPath": "/var/lib/docker/containers/0ade25f8970db790384c9e6218706172b64b7d67e0d579aa193d87af2e1658cd/hosts",
	        "LogPath": "/var/lib/docker/containers/0ade25f8970db790384c9e6218706172b64b7d67e0d579aa193d87af2e1658cd/0ade25f8970db790384c9e6218706172b64b7d67e0d579aa193d87af2e1658cd-json.log",
	        "Name": "/addons-010148",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-010148:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-010148",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bc4cb24e43a928762240c9acca974c6a3742c228dea0cc407e1cbcd11667f3c4-init/diff:/var/lib/docker/overlay2/3c736a112b0015011dd3f0c044c902fbcf6dfb1fd861cd8c6e5619934cdeaf76/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bc4cb24e43a928762240c9acca974c6a3742c228dea0cc407e1cbcd11667f3c4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bc4cb24e43a928762240c9acca974c6a3742c228dea0cc407e1cbcd11667f3c4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bc4cb24e43a928762240c9acca974c6a3742c228dea0cc407e1cbcd11667f3c4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-010148",
	                "Source": "/var/lib/docker/volumes/addons-010148/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-010148",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-010148",
	                "name.minikube.sigs.k8s.io": "addons-010148",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "54bd52e3edcad6d1addd31a7129b5043f9056e4a167e024fd3973abd56f95696",
	            "SandboxKey": "/var/run/docker/netns/54bd52e3edca",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-010148": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "22d92805fbe4e1d7aab9b57cc9bfee25f02e9c623b5b865a3e3b744ff69af499",
	                    "EndpointID": "3f0b382f618c6cba716c611d44b75bfda8e6e3022bd80283ad2ad3665ea0e745",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-010148",
	                        "0ade25f8970d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
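
Most of the inspect dump above is routine; the piece relevant to the failed ssh curl is the port map under NetworkSettings.Ports, which shows the node's SSH endpoint published on 127.0.0.1:32768. Individual fields can be pulled without the full JSON dump via a Go template, e.g. (a sketch; the "22/tcp" format string is the same one minikube itself uses further down in this log):

docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-010148   # 32768 for this run
docker container inspect -f '{{.State.Status}} (restarts: {{.RestartCount}})' addons-010148                  # running (restarts: 0)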
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-010148 -n addons-010148
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-010148 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-010148 logs -n 25: (1.091553876s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-335603 | jenkins | v1.33.1 | 19 Aug 24 11:57 UTC |                     |
	|         | download-docker-335603                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-335603                                                                   | download-docker-335603 | jenkins | v1.33.1 | 19 Aug 24 11:57 UTC | 19 Aug 24 11:57 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-139959   | jenkins | v1.33.1 | 19 Aug 24 11:57 UTC |                     |
	|         | binary-mirror-139959                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45177                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-139959                                                                     | binary-mirror-139959   | jenkins | v1.33.1 | 19 Aug 24 11:57 UTC | 19 Aug 24 11:57 UTC |
	| addons  | enable dashboard -p                                                                         | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 11:57 UTC |                     |
	|         | addons-010148                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 11:57 UTC |                     |
	|         | addons-010148                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-010148 --wait=true                                                                | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 11:57 UTC | 19 Aug 24 12:00 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | addons-010148 addons disable                                                                | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | addons-010148                                                                               |                        |         |         |                     |                     |
	| ip      | addons-010148 ip                                                                            | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	| addons  | addons-010148 addons disable                                                                | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-010148 addons disable                                                                | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:01 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:01 UTC | 19 Aug 24 12:01 UTC |
	|         | -p addons-010148                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:01 UTC | 19 Aug 24 12:01 UTC |
	|         | addons-010148                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:01 UTC | 19 Aug 24 12:01 UTC |
	|         | -p addons-010148                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-010148 ssh cat                                                                       | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:01 UTC | 19 Aug 24 12:01 UTC |
	|         | /opt/local-path-provisioner/pvc-520035d6-e6c6-424a-94a4-de8464c48f46_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-010148 addons disable                                                                | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:01 UTC | 19 Aug 24 12:02 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-010148 addons disable                                                                | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:01 UTC | 19 Aug 24 12:01 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-010148 addons                                                                        | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:01 UTC | 19 Aug 24 12:01 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-010148 addons                                                                        | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:01 UTC | 19 Aug 24 12:01 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-010148 ssh curl -s                                                                   | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:01 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-010148 addons disable                                                                | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:01 UTC | 19 Aug 24 12:01 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-010148 ip                                                                            | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:03 UTC | 19 Aug 24 12:03 UTC |
	| addons  | addons-010148 addons disable                                                                | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:03 UTC | 19 Aug 24 12:03 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-010148 addons disable                                                                | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:03 UTC | 19 Aug 24 12:04 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 11:57:21
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 11:57:21.136622   85279 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:57:21.136750   85279 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:57:21.136760   85279 out.go:358] Setting ErrFile to fd 2...
	I0819 11:57:21.136766   85279 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:57:21.136998   85279 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-77145/.minikube/bin
	I0819 11:57:21.137687   85279 out.go:352] Setting JSON to false
	I0819 11:57:21.138582   85279 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5936,"bootTime":1724062705,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 11:57:21.138646   85279 start.go:139] virtualization: kvm guest
	I0819 11:57:21.140615   85279 out.go:177] * [addons-010148] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 11:57:21.141875   85279 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 11:57:21.141892   85279 notify.go:220] Checking for updates...
	I0819 11:57:21.144168   85279 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:57:21.145514   85279 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-77145/kubeconfig
	I0819 11:57:21.146717   85279 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-77145/.minikube
	I0819 11:57:21.147965   85279 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 11:57:21.149077   85279 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:57:21.150501   85279 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:57:21.171324   85279 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 11:57:21.171432   85279 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 11:57:21.215090   85279 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-19 11:57:21.206969249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 11:57:21.215188   85279 docker.go:307] overlay module found
	I0819 11:57:21.216780   85279 out.go:177] * Using the docker driver based on user configuration
	I0819 11:57:21.217937   85279 start.go:297] selected driver: docker
	I0819 11:57:21.217959   85279 start.go:901] validating driver "docker" against <nil>
	I0819 11:57:21.217971   85279 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:57:21.218690   85279 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 11:57:21.264361   85279 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-19 11:57:21.254621157 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 11:57:21.264552   85279 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:57:21.264753   85279 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:57:21.266335   85279 out.go:177] * Using Docker driver with root privileges
	I0819 11:57:21.267669   85279 cni.go:84] Creating CNI manager for ""
	I0819 11:57:21.267687   85279 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 11:57:21.267697   85279 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 11:57:21.267785   85279 start.go:340] cluster config:
	{Name:addons-010148 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-010148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:57:21.269229   85279 out.go:177] * Starting "addons-010148" primary control-plane node in "addons-010148" cluster
	I0819 11:57:21.270474   85279 cache.go:121] Beginning downloading kic base image for docker with crio
	I0819 11:57:21.271620   85279 out.go:177] * Pulling base image v0.0.44-1723740748-19452 ...
	I0819 11:57:21.272645   85279 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 11:57:21.272678   85279 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19479-77145/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 11:57:21.272685   85279 cache.go:56] Caching tarball of preloaded images
	I0819 11:57:21.272737   85279 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 11:57:21.272755   85279 preload.go:172] Found /home/jenkins/minikube-integration/19479-77145/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 11:57:21.272763   85279 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 11:57:21.273123   85279 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/config.json ...
	I0819 11:57:21.273153   85279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/config.json: {Name:mk4719226a7e3df11c1f16a79f661e044f3c1059 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:21.288181   85279 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 11:57:21.288324   85279 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 11:57:21.288345   85279 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0819 11:57:21.288356   85279 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0819 11:57:21.288369   85279 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0819 11:57:21.288380   85279 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from local cache
	I0819 11:57:33.216374   85279 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from cached tarball
	I0819 11:57:33.216425   85279 cache.go:194] Successfully downloaded all kic artifacts
	I0819 11:57:33.216468   85279 start.go:360] acquireMachinesLock for addons-010148: {Name:mk39b43a3047408d13d6bdd6d56728f128387755 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:57:33.216569   85279 start.go:364] duration metric: took 78.486µs to acquireMachinesLock for "addons-010148"
	I0819 11:57:33.216593   85279 start.go:93] Provisioning new machine with config: &{Name:addons-010148 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-010148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 11:57:33.216721   85279 start.go:125] createHost starting for "" (driver="docker")
	I0819 11:57:33.218570   85279 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0819 11:57:33.218901   85279 start.go:159] libmachine.API.Create for "addons-010148" (driver="docker")
	I0819 11:57:33.218949   85279 client.go:168] LocalClient.Create starting
	I0819 11:57:33.219051   85279 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19479-77145/.minikube/certs/ca.pem
	I0819 11:57:33.329753   85279 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19479-77145/.minikube/certs/cert.pem
	I0819 11:57:33.652634   85279 cli_runner.go:164] Run: docker network inspect addons-010148 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0819 11:57:33.667939   85279 cli_runner.go:211] docker network inspect addons-010148 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0819 11:57:33.668026   85279 network_create.go:284] running [docker network inspect addons-010148] to gather additional debugging logs...
	I0819 11:57:33.668050   85279 cli_runner.go:164] Run: docker network inspect addons-010148
	W0819 11:57:33.684360   85279 cli_runner.go:211] docker network inspect addons-010148 returned with exit code 1
	I0819 11:57:33.684410   85279 network_create.go:287] error running [docker network inspect addons-010148]: docker network inspect addons-010148: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-010148 not found
	I0819 11:57:33.684424   85279 network_create.go:289] output of [docker network inspect addons-010148]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-010148 not found
	
	** /stderr **
	I0819 11:57:33.684559   85279 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 11:57:33.700349   85279 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001af0780}
	I0819 11:57:33.700391   85279 network_create.go:124] attempt to create docker network addons-010148 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0819 11:57:33.700434   85279 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-010148 addons-010148
	I0819 11:57:33.759438   85279 network_create.go:108] docker network addons-010148 192.168.49.0/24 created
	I0819 11:57:33.759477   85279 kic.go:121] calculated static IP "192.168.49.2" for the "addons-010148" container
	I0819 11:57:33.759548   85279 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0819 11:57:33.774214   85279 cli_runner.go:164] Run: docker volume create addons-010148 --label name.minikube.sigs.k8s.io=addons-010148 --label created_by.minikube.sigs.k8s.io=true
	I0819 11:57:33.790735   85279 oci.go:103] Successfully created a docker volume addons-010148
	I0819 11:57:33.790820   85279 cli_runner.go:164] Run: docker run --rm --name addons-010148-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-010148 --entrypoint /usr/bin/test -v addons-010148:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib
	I0819 11:57:38.771046   85279 cli_runner.go:217] Completed: docker run --rm --name addons-010148-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-010148 --entrypoint /usr/bin/test -v addons-010148:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib: (4.980166693s)
	I0819 11:57:38.771079   85279 oci.go:107] Successfully prepared a docker volume addons-010148
	I0819 11:57:38.771098   85279 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 11:57:38.771122   85279 kic.go:194] Starting extracting preloaded images to volume ...
	I0819 11:57:38.771175   85279 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19479-77145/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-010148:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir
	I0819 11:57:43.216217   85279 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19479-77145/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-010148:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir: (4.444984551s)
	I0819 11:57:43.216249   85279 kic.go:203] duration metric: took 4.445124389s to extract preloaded images to volume ...
	W0819 11:57:43.216377   85279 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0819 11:57:43.216470   85279 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0819 11:57:43.262225   85279 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-010148 --name addons-010148 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-010148 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-010148 --network addons-010148 --ip 192.168.49.2 --volume addons-010148:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d
	I0819 11:57:43.559161   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Running}}
	I0819 11:57:43.576723   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:57:43.595143   85279 cli_runner.go:164] Run: docker exec addons-010148 stat /var/lib/dpkg/alternatives/iptables
	I0819 11:57:43.642299   85279 oci.go:144] the created container "addons-010148" has a running status.
	I0819 11:57:43.642351   85279 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa...
	I0819 11:57:43.764017   85279 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0819 11:57:43.783776   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:57:43.802680   85279 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0819 11:57:43.802701   85279 kic_runner.go:114] Args: [docker exec --privileged addons-010148 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0819 11:57:43.847341   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:57:43.867122   85279 machine.go:93] provisionDockerMachine start ...
	I0819 11:57:43.867252   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:57:43.891643   85279 main.go:141] libmachine: Using SSH client type: native
	I0819 11:57:43.891915   85279 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0819 11:57:43.891958   85279 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 11:57:43.892684   85279 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49208->127.0.0.1:32768: read: connection reset by peer
	I0819 11:57:47.013311   85279 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-010148
	
	I0819 11:57:47.013360   85279 ubuntu.go:169] provisioning hostname "addons-010148"
	I0819 11:57:47.013428   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:57:47.029891   85279 main.go:141] libmachine: Using SSH client type: native
	I0819 11:57:47.030125   85279 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0819 11:57:47.030141   85279 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-010148 && echo "addons-010148" | sudo tee /etc/hostname
	I0819 11:57:47.156889   85279 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-010148
	
	I0819 11:57:47.156961   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:57:47.173004   85279 main.go:141] libmachine: Using SSH client type: native
	I0819 11:57:47.173178   85279 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0819 11:57:47.173194   85279 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-010148' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-010148/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-010148' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 11:57:47.289902   85279 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 11:57:47.289943   85279 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19479-77145/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-77145/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-77145/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-77145/.minikube}
	I0819 11:57:47.289972   85279 ubuntu.go:177] setting up certificates
	I0819 11:57:47.289994   85279 provision.go:84] configureAuth start
	I0819 11:57:47.290065   85279 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-010148
	I0819 11:57:47.306442   85279 provision.go:143] copyHostCerts
	I0819 11:57:47.306512   85279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-77145/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-77145/.minikube/key.pem (1675 bytes)
	I0819 11:57:47.306616   85279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-77145/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-77145/.minikube/ca.pem (1078 bytes)
	I0819 11:57:47.306680   85279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-77145/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-77145/.minikube/cert.pem (1123 bytes)
	I0819 11:57:47.306740   85279 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-77145/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-77145/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-77145/.minikube/certs/ca-key.pem org=jenkins.addons-010148 san=[127.0.0.1 192.168.49.2 addons-010148 localhost minikube]
	I0819 11:57:47.397769   85279 provision.go:177] copyRemoteCerts
	I0819 11:57:47.397833   85279 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 11:57:47.397892   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:57:47.414888   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:57:47.502260   85279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-77145/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 11:57:47.523404   85279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-77145/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 11:57:47.543931   85279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-77145/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 11:57:47.564286   85279 provision.go:87] duration metric: took 274.271486ms to configureAuth
	I0819 11:57:47.564314   85279 ubuntu.go:193] setting minikube options for container-runtime
	I0819 11:57:47.564500   85279 config.go:182] Loaded profile config "addons-010148": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:57:47.564614   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:57:47.580919   85279 main.go:141] libmachine: Using SSH client type: native
	I0819 11:57:47.581105   85279 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0819 11:57:47.581120   85279 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 11:57:47.782340   85279 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 11:57:47.782366   85279 machine.go:96] duration metric: took 3.915219106s to provisionDockerMachine
	I0819 11:57:47.782397   85279 client.go:171] duration metric: took 14.563420774s to LocalClient.Create
	I0819 11:57:47.782435   85279 start.go:167] duration metric: took 14.563537451s to libmachine.API.Create "addons-010148"
	I0819 11:57:47.782449   85279 start.go:293] postStartSetup for "addons-010148" (driver="docker")
	I0819 11:57:47.782462   85279 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 11:57:47.782525   85279 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 11:57:47.782566   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:57:47.798973   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:57:47.886333   85279 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 11:57:47.889344   85279 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0819 11:57:47.889371   85279 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0819 11:57:47.889380   85279 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0819 11:57:47.889387   85279 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0819 11:57:47.889397   85279 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-77145/.minikube/addons for local assets ...
	I0819 11:57:47.889457   85279 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-77145/.minikube/files for local assets ...
	I0819 11:57:47.889481   85279 start.go:296] duration metric: took 107.026975ms for postStartSetup
	I0819 11:57:47.889741   85279 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-010148
	I0819 11:57:47.905971   85279 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/config.json ...
	I0819 11:57:47.906208   85279 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:57:47.906249   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:57:47.922551   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:57:48.006682   85279 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0819 11:57:48.010676   85279 start.go:128] duration metric: took 14.793934489s to createHost
	I0819 11:57:48.010702   85279 start.go:83] releasing machines lock for "addons-010148", held for 14.794120907s
	I0819 11:57:48.010769   85279 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-010148
	I0819 11:57:48.026566   85279 ssh_runner.go:195] Run: cat /version.json
	I0819 11:57:48.026623   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:57:48.026654   85279 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 11:57:48.026798   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:57:48.043954   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:57:48.044151   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:57:48.125687   85279 ssh_runner.go:195] Run: systemctl --version
	I0819 11:57:48.129726   85279 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 11:57:48.265379   85279 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 11:57:48.269641   85279 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 11:57:48.286470   85279 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0819 11:57:48.286555   85279 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 11:57:48.311241   85279 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
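Note that "disabled" here is only an .mk_disabled rename, so the stock CNI configs survive on disk. Undoing it later is the inverse rename (an illustrative sketch, not a command minikube itself runs):

	for f in /etc/cni/net.d/*.mk_disabled; do
	  sudo mv "$f" "${f%.mk_disabled}"   # strip the .mk_disabled suffix
	done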
	I0819 11:57:48.311272   85279 start.go:495] detecting cgroup driver to use...
	I0819 11:57:48.311306   85279 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0819 11:57:48.311352   85279 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 11:57:48.324726   85279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 11:57:48.334309   85279 docker.go:217] disabling cri-docker service (if available) ...
	I0819 11:57:48.334357   85279 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 11:57:48.346514   85279 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 11:57:48.359243   85279 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 11:57:48.439099   85279 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 11:57:48.519943   85279 docker.go:233] disabling docker service ...
	I0819 11:57:48.520017   85279 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 11:57:48.537984   85279 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 11:57:48.548220   85279 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 11:57:48.626622   85279 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 11:57:48.712024   85279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 11:57:48.722192   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 11:57:48.736585   85279 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 11:57:48.736647   85279 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:57:48.744908   85279 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 11:57:48.744975   85279 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:57:48.753248   85279 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:57:48.761586   85279 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:57:48.770214   85279 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 11:57:48.778565   85279 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:57:48.787708   85279 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:57:48.801481   85279 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
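Taken together, the sed edits above leave the CRI-O drop-in with these settings (keys only; a sketch reassembled from the commands, with section headers and untouched stock options omitted):

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]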
	I0819 11:57:48.809887   85279 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 11:57:48.816928   85279 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 11:57:48.824306   85279 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:57:48.898080   85279 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 11:57:48.997480   85279 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 11:57:48.997544   85279 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 11:57:49.000800   85279 start.go:563] Will wait 60s for crictl version
	I0819 11:57:49.000855   85279 ssh_runner.go:195] Run: which crictl
	I0819 11:57:49.003889   85279 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 11:57:49.035659   85279 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0819 11:57:49.035743   85279 ssh_runner.go:195] Run: crio --version
	I0819 11:57:49.068930   85279 ssh_runner.go:195] Run: crio --version
	I0819 11:57:49.104451   85279 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0819 11:57:49.105924   85279 cli_runner.go:164] Run: docker network inspect addons-010148 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 11:57:49.121695   85279 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0819 11:57:49.125297   85279 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
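The hosts update deliberately ends in "cp" rather than "mv": inside a Docker container /etc/hosts is a bind mount, so the file can only be rewritten in place; renaming a temp file over it would fail.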
	I0819 11:57:49.135128   85279 kubeadm.go:883] updating cluster {Name:addons-010148 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-010148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 11:57:49.135256   85279 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 11:57:49.135300   85279 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 11:57:49.199167   85279 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 11:57:49.199191   85279 crio.go:433] Images already preloaded, skipping extraction
	I0819 11:57:49.199239   85279 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 11:57:49.230441   85279 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 11:57:49.230464   85279 cache_images.go:84] Images are preloaded, skipping loading
	I0819 11:57:49.230473   85279 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0819 11:57:49.230567   85279 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-010148 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-010148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
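The doubled ExecStart in the drop-in above is standard systemd idiom: the empty ExecStart= clears the command inherited from the packaged unit so the next line replaces it rather than conflicting with it. The merged result can be inspected with:

	systemctl cat kubelet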
	I0819 11:57:49.230631   85279 ssh_runner.go:195] Run: crio config
	I0819 11:57:49.270887   85279 cni.go:84] Creating CNI manager for ""
	I0819 11:57:49.270912   85279 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 11:57:49.270927   85279 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 11:57:49.270964   85279 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-010148 NodeName:addons-010148 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 11:57:49.271131   85279 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-010148"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 11:57:49.271209   85279 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 11:57:49.279366   85279 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 11:57:49.279426   85279 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 11:57:49.287416   85279 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0819 11:57:49.303403   85279 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 11:57:49.319344   85279 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0819 11:57:49.335392   85279 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0819 11:57:49.338498   85279 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 11:57:49.348268   85279 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:57:49.423349   85279 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 11:57:49.435952   85279 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148 for IP: 192.168.49.2
	I0819 11:57:49.435972   85279 certs.go:194] generating shared ca certs ...
	I0819 11:57:49.435990   85279 certs.go:226] acquiring lock for ca certs: {Name:mkba49214281fce7ee45fe1d9fdbc484fa0bf44b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:49.436110   85279 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-77145/.minikube/ca.key
	I0819 11:57:49.496065   85279 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-77145/.minikube/ca.crt ...
	I0819 11:57:49.496094   85279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-77145/.minikube/ca.crt: {Name:mk6262b0d88ceffd2b2b4bc4c54db54d0ae61c38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:49.496260   85279 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-77145/.minikube/ca.key ...
	I0819 11:57:49.496272   85279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-77145/.minikube/ca.key: {Name:mk27397098351b2ea59af7f0894194f89474b2f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:49.496381   85279 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-77145/.minikube/proxy-client-ca.key
	I0819 11:57:49.628544   85279 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-77145/.minikube/proxy-client-ca.crt ...
	I0819 11:57:49.628576   85279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-77145/.minikube/proxy-client-ca.crt: {Name:mk1216eab57117a403bfe709a4830a59d446e833 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:49.628747   85279 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-77145/.minikube/proxy-client-ca.key ...
	I0819 11:57:49.628757   85279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-77145/.minikube/proxy-client-ca.key: {Name:mkc45b17cf30c56bbb27a361cd5ecffecdf5065b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:49.628824   85279 certs.go:256] generating profile certs ...
	I0819 11:57:49.628882   85279 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.key
	I0819 11:57:49.628895   85279 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.crt with IP's: []
	I0819 11:57:49.781863   85279 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.crt ...
	I0819 11:57:49.781901   85279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.crt: {Name:mk3e0630afee9742ed77d78f3e4835528ac4ab0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:49.782088   85279 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.key ...
	I0819 11:57:49.782099   85279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.key: {Name:mk6a306d20eeaacca346fea41bb9221251b42896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:49.782172   85279 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/apiserver.key.41a0df35
	I0819 11:57:49.782190   85279 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/apiserver.crt.41a0df35 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0819 11:57:50.079427   85279 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/apiserver.crt.41a0df35 ...
	I0819 11:57:50.079459   85279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/apiserver.crt.41a0df35: {Name:mk3ae81cc233bbe6b0a93138939ebda0aa2e0358 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:50.079619   85279 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/apiserver.key.41a0df35 ...
	I0819 11:57:50.079635   85279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/apiserver.key.41a0df35: {Name:mk6b84dd14beae1eedc22ccb7cae1e000ce51c76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:50.079706   85279 certs.go:381] copying /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/apiserver.crt.41a0df35 -> /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/apiserver.crt
	I0819 11:57:50.079777   85279 certs.go:385] copying /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/apiserver.key.41a0df35 -> /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/apiserver.key
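The SAN set on this apiserver cert is predictable from the config logged earlier: 10.96.0.1 is the first address of the 10.96.0.0/12 service CIDR (the in-cluster "kubernetes" Service IP), 192.168.49.2 is the node IP, and 127.0.0.1 plus 10.0.0.1 (the first address of minikube's older default service range, kept for compatibility) round out the list.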
	I0819 11:57:50.079823   85279 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/proxy-client.key
	I0819 11:57:50.079837   85279 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/proxy-client.crt with IP's: []
	I0819 11:57:50.189323   85279 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/proxy-client.crt ...
	I0819 11:57:50.189354   85279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/proxy-client.crt: {Name:mkcacef18d5ca1277820725073144a71c6a38986 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:50.189524   85279 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/proxy-client.key ...
	I0819 11:57:50.189534   85279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/proxy-client.key: {Name:mkfd5d83dbc372bad43edd8cce16667ad0eca786 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:50.189699   85279 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-77145/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 11:57:50.189733   85279 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-77145/.minikube/certs/ca.pem (1078 bytes)
	I0819 11:57:50.189759   85279 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-77145/.minikube/certs/cert.pem (1123 bytes)
	I0819 11:57:50.189783   85279 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-77145/.minikube/certs/key.pem (1675 bytes)
	I0819 11:57:50.190418   85279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-77145/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 11:57:50.213394   85279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-77145/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 11:57:50.236665   85279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-77145/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 11:57:50.257595   85279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-77145/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 11:57:50.278202   85279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0819 11:57:50.298740   85279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 11:57:50.318777   85279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 11:57:50.339057   85279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 11:57:50.359644   85279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-77145/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 11:57:50.380625   85279 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 11:57:50.396065   85279 ssh_runner.go:195] Run: openssl version
	I0819 11:57:50.400845   85279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 11:57:50.409352   85279 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:57:50.412351   85279 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:57:50.412397   85279 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:57:50.418550   85279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
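The symlink name follows the OpenSSL subject-hash convention: directory-based certificate lookup expects <hash>.0, where the hash is derived from the certificate itself. Spelled out as a sketch of what the two commands above do:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# HASH is b5213941 for this CA, matching the link name in the log
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"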
	I0819 11:57:50.426499   85279 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 11:57:50.429312   85279 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 11:57:50.429358   85279 kubeadm.go:392] StartCluster: {Name:addons-010148 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-010148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:57:50.429434   85279 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 11:57:50.429469   85279 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 11:57:50.461670   85279 cri.go:89] found id: ""
	I0819 11:57:50.461728   85279 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 11:57:50.469688   85279 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 11:57:50.477425   85279 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0819 11:57:50.477479   85279 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 11:57:50.485041   85279 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 11:57:50.485058   85279 kubeadm.go:157] found existing configuration files:
	
	I0819 11:57:50.485103   85279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 11:57:50.492686   85279 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 11:57:50.492727   85279 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 11:57:50.500025   85279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 11:57:50.507776   85279 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 11:57:50.507823   85279 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 11:57:50.515035   85279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 11:57:50.522812   85279 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 11:57:50.522867   85279 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 11:57:50.530313   85279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 11:57:50.537588   85279 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 11:57:50.537639   85279 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 11:57:50.544852   85279 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0819 11:57:50.577292   85279 kubeadm.go:310] W0819 11:57:50.576512    1289 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 11:57:50.577806   85279 kubeadm.go:310] W0819 11:57:50.577287    1289 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 11:57:50.595300   85279 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1066-gcp\n", err: exit status 1
	I0819 11:57:50.643330   85279 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
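The two v1beta3 deprecation warnings above are benign for this run; the remedy they point at is mechanical (a sketch using the config path from this run, output file name illustrative; v1beta4 is the current spec in kubeadm v1.31):

	kubeadm config migrate \
	  --old-config /var/tmp/minikube/kubeadm.yaml \
	  --new-config /var/tmp/minikube/kubeadm-v1beta4.yaml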
	I0819 11:57:59.131667   85279 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 11:57:59.131741   85279 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 11:57:59.131856   85279 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0819 11:57:59.131963   85279 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1066-gcp
	I0819 11:57:59.132000   85279 kubeadm.go:310] OS: Linux
	I0819 11:57:59.132066   85279 kubeadm.go:310] CGROUPS_CPU: enabled
	I0819 11:57:59.132130   85279 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0819 11:57:59.132198   85279 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0819 11:57:59.132263   85279 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0819 11:57:59.132329   85279 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0819 11:57:59.132424   85279 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0819 11:57:59.132489   85279 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0819 11:57:59.132534   85279 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0819 11:57:59.132597   85279 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0819 11:57:59.132693   85279 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 11:57:59.132834   85279 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 11:57:59.132977   85279 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 11:57:59.133073   85279 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 11:57:59.134942   85279 out.go:235]   - Generating certificates and keys ...
	I0819 11:57:59.135018   85279 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 11:57:59.135075   85279 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 11:57:59.135144   85279 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 11:57:59.135190   85279 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 11:57:59.135258   85279 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 11:57:59.135326   85279 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 11:57:59.135394   85279 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 11:57:59.135518   85279 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-010148 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0819 11:57:59.135592   85279 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 11:57:59.135726   85279 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-010148 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0819 11:57:59.135783   85279 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 11:57:59.135835   85279 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 11:57:59.135873   85279 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 11:57:59.135916   85279 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 11:57:59.135957   85279 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 11:57:59.136005   85279 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 11:57:59.136050   85279 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 11:57:59.136101   85279 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 11:57:59.136176   85279 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 11:57:59.136258   85279 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 11:57:59.136331   85279 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 11:57:59.137794   85279 out.go:235]   - Booting up control plane ...
	I0819 11:57:59.137896   85279 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 11:57:59.137969   85279 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 11:57:59.138025   85279 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 11:57:59.138108   85279 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 11:57:59.138183   85279 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 11:57:59.138222   85279 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 11:57:59.138332   85279 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 11:57:59.138426   85279 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 11:57:59.138475   85279 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.774421ms
	I0819 11:57:59.138535   85279 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 11:57:59.138591   85279 kubeadm.go:310] [api-check] The API server is healthy after 4.502162972s
	I0819 11:57:59.138719   85279 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 11:57:59.138876   85279 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 11:57:59.138937   85279 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 11:57:59.139082   85279 kubeadm.go:310] [mark-control-plane] Marking the node addons-010148 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 11:57:59.139136   85279 kubeadm.go:310] [bootstrap-token] Using token: ivphnl.4siv2zo7antv26ew
	I0819 11:57:59.140646   85279 out.go:235]   - Configuring RBAC rules ...
	I0819 11:57:59.140750   85279 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 11:57:59.140819   85279 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 11:57:59.140980   85279 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 11:57:59.141191   85279 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 11:57:59.141357   85279 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 11:57:59.141488   85279 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 11:57:59.141653   85279 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 11:57:59.141692   85279 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 11:57:59.141731   85279 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 11:57:59.141740   85279 kubeadm.go:310] 
	I0819 11:57:59.141794   85279 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 11:57:59.141801   85279 kubeadm.go:310] 
	I0819 11:57:59.141889   85279 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 11:57:59.141909   85279 kubeadm.go:310] 
	I0819 11:57:59.141950   85279 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 11:57:59.142022   85279 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 11:57:59.142084   85279 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 11:57:59.142094   85279 kubeadm.go:310] 
	I0819 11:57:59.142166   85279 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 11:57:59.142175   85279 kubeadm.go:310] 
	I0819 11:57:59.142241   85279 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 11:57:59.142250   85279 kubeadm.go:310] 
	I0819 11:57:59.142316   85279 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 11:57:59.142378   85279 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 11:57:59.142444   85279 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 11:57:59.142453   85279 kubeadm.go:310] 
	I0819 11:57:59.142520   85279 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 11:57:59.142588   85279 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 11:57:59.142597   85279 kubeadm.go:310] 
	I0819 11:57:59.142666   85279 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ivphnl.4siv2zo7antv26ew \
	I0819 11:57:59.142752   85279 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fbed9ee220740d41455e00aa7089abcb0e7d638dbb25406c98dd05f5405a9fed \
	I0819 11:57:59.142785   85279 kubeadm.go:310] 	--control-plane 
	I0819 11:57:59.142794   85279 kubeadm.go:310] 
	I0819 11:57:59.142910   85279 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 11:57:59.142929   85279 kubeadm.go:310] 
	I0819 11:57:59.142999   85279 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ivphnl.4siv2zo7antv26ew \
	I0819 11:57:59.143154   85279 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fbed9ee220740d41455e00aa7089abcb0e7d638dbb25406c98dd05f5405a9fed 
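If the join command is needed after this output scrolls away, `kubeadm token create --print-join-command` regenerates it, and the CA cert hash can be recomputed with the standard kubeadm recipe (assuming an RSA CA key; cert path per the certificatesDir configured above):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 | awk '{print $2}'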
	I0819 11:57:59.143172   85279 cni.go:84] Creating CNI manager for ""
	I0819 11:57:59.143183   85279 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 11:57:59.144692   85279 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 11:57:59.146040   85279 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 11:57:59.150121   85279 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 11:57:59.150147   85279 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 11:57:59.166393   85279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0819 11:57:59.350258   85279 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 11:57:59.350363   85279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:57:59.350408   85279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-010148 minikube.k8s.io/updated_at=2024_08_19T11_57_59_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6 minikube.k8s.io/name=addons-010148 minikube.k8s.io/primary=true
	I0819 11:57:59.357992   85279 ops.go:34] apiserver oom_adj: -16
	I0819 11:57:59.457588   85279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:57:59.957661   85279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:58:00.457681   85279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:58:00.957619   85279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:58:01.457824   85279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:58:01.957982   85279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:58:02.458209   85279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:58:02.958396   85279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:58:03.458453   85279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:58:03.523537   85279 kubeadm.go:1113] duration metric: took 4.173248662s to wait for elevateKubeSystemPrivileges
	I0819 11:58:03.523573   85279 kubeadm.go:394] duration metric: took 13.094219652s to StartCluster
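The burst of identical `kubectl get sa default` calls above, one every ~500ms, is a readiness probe: elevateKubeSystemPrivileges waits until the `default` service account exists before creating the minikube-rbac cluster role binding against it. A minimal sketch of the same poll-until-success loop; getServiceAccount is a hypothetical stand-in for the kubectl invocation (assumes imports: fmt, time):

	// Sketch: poll until the default service account exists, with a
	// hard deadline, mirroring the ~500ms cadence visible in the log.
	func waitForDefaultSA(getServiceAccount func() error, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if err := getServiceAccount(); err == nil {
				return nil // service account exists; RBAC bootstrap can proceed
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not ready after %s", timeout)
	}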
	I0819 11:58:03.523602   85279 settings.go:142] acquiring lock: {Name:mk516bc3d1226b2b31d897fcb99c3d41b4827cb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:58:03.523746   85279 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-77145/kubeconfig
	I0819 11:58:03.524179   85279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-77145/kubeconfig: {Name:mk37d44a49445dbad6d9c9218733c895ba35a6d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:58:03.524402   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 11:58:03.524405   85279 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 11:58:03.524630   85279 config.go:182] Loaded profile config "addons-010148": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:58:03.524573   85279 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
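The `toEnable` map above drives addon setup, and the interleaved "Setting addon ..." lines that follow are the visible effect of enabling each addon in its own goroutine against a shared logger. A minimal sketch of that fan-out; enableAddon is a hypothetical stand-in for minikube's per-addon setup (assumes imports: log, sync):

	// Sketch: enable each requested addon concurrently and wait for
	// all of them to finish.
	func enableAddons(toEnable map[string]bool, enableAddon func(name string) error) {
		var wg sync.WaitGroup
		for name, enabled := range toEnable {
			if !enabled {
				continue
			}
			wg.Add(1)
			go func(name string) {
				defer wg.Done()
				if err := enableAddon(name); err != nil {
					log.Printf("! Enabling %q returned an error: %v", name, err)
				}
			}(name)
		}
		wg.Wait()
	}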
	I0819 11:58:03.524688   85279 addons.go:69] Setting default-storageclass=true in profile "addons-010148"
	I0819 11:58:03.524712   85279 addons.go:69] Setting yakd=true in profile "addons-010148"
	I0819 11:58:03.524742   85279 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-010148"
	I0819 11:58:03.524753   85279 addons.go:234] Setting addon yakd=true in "addons-010148"
	I0819 11:58:03.524741   85279 addons.go:69] Setting metrics-server=true in profile "addons-010148"
	I0819 11:58:03.524765   85279 addons.go:69] Setting storage-provisioner=true in profile "addons-010148"
	I0819 11:58:03.524787   85279 addons.go:234] Setting addon metrics-server=true in "addons-010148"
	I0819 11:58:03.524794   85279 host.go:66] Checking if "addons-010148" exists ...
	I0819 11:58:03.524798   85279 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-010148"
	I0819 11:58:03.524816   85279 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-010148"
	I0819 11:58:03.524808   85279 addons.go:69] Setting cloud-spanner=true in profile "addons-010148"
	I0819 11:58:03.524822   85279 addons.go:69] Setting ingress=true in profile "addons-010148"
	I0819 11:58:03.524839   85279 addons.go:69] Setting ingress-dns=true in profile "addons-010148"
	I0819 11:58:03.524849   85279 addons.go:234] Setting addon cloud-spanner=true in "addons-010148"
	I0819 11:58:03.524849   85279 addons.go:69] Setting registry=true in profile "addons-010148"
	I0819 11:58:03.524855   85279 addons.go:234] Setting addon ingress=true in "addons-010148"
	I0819 11:58:03.524859   85279 addons.go:234] Setting addon ingress-dns=true in "addons-010148"
	I0819 11:58:03.524869   85279 addons.go:234] Setting addon registry=true in "addons-010148"
	I0819 11:58:03.524860   85279 addons.go:69] Setting inspektor-gadget=true in profile "addons-010148"
	I0819 11:58:03.524884   85279 host.go:66] Checking if "addons-010148" exists ...
	I0819 11:58:03.524890   85279 addons.go:69] Setting gcp-auth=true in profile "addons-010148"
	I0819 11:58:03.524893   85279 host.go:66] Checking if "addons-010148" exists ...
	I0819 11:58:03.524830   85279 host.go:66] Checking if "addons-010148" exists ...
	I0819 11:58:03.524907   85279 mustload.go:65] Loading cluster: addons-010148
	I0819 11:58:03.524906   85279 addons.go:234] Setting addon inspektor-gadget=true in "addons-010148"
	I0819 11:58:03.524975   85279 host.go:66] Checking if "addons-010148" exists ...
	I0819 11:58:03.525067   85279 config.go:182] Loaded profile config "addons-010148": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:58:03.525116   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.525138   85279 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-010148"
	I0819 11:58:03.525204   85279 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-010148"
	I0819 11:58:03.525244   85279 host.go:66] Checking if "addons-010148" exists ...
	I0819 11:58:03.525299   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.525328   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.525347   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.525359   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.525399   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.525474   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.524840   85279 host.go:66] Checking if "addons-010148" exists ...
	I0819 11:58:03.526267   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.524884   85279 host.go:66] Checking if "addons-010148" exists ...
	I0819 11:58:03.524893   85279 host.go:66] Checking if "addons-010148" exists ...
	I0819 11:58:03.526609   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.527563   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.527913   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.528535   85279 out.go:177] * Verifying Kubernetes components...
	I0819 11:58:03.526628   85279 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-010148"
	I0819 11:58:03.528930   85279 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-010148"
	I0819 11:58:03.526642   85279 addons.go:69] Setting volcano=true in profile "addons-010148"
	I0819 11:58:03.529002   85279 addons.go:234] Setting addon volcano=true in "addons-010148"
	I0819 11:58:03.529047   85279 host.go:66] Checking if "addons-010148" exists ...
	I0819 11:58:03.524790   85279 addons.go:234] Setting addon storage-provisioner=true in "addons-010148"
	I0819 11:58:03.529158   85279 host.go:66] Checking if "addons-010148" exists ...
	I0819 11:58:03.529506   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.529600   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.526728   85279 addons.go:69] Setting helm-tiller=true in profile "addons-010148"
	I0819 11:58:03.530067   85279 addons.go:234] Setting addon helm-tiller=true in "addons-010148"
	I0819 11:58:03.530105   85279 host.go:66] Checking if "addons-010148" exists ...
	I0819 11:58:03.530188   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.526815   85279 addons.go:69] Setting volumesnapshots=true in profile "addons-010148"
	I0819 11:58:03.530675   85279 addons.go:234] Setting addon volumesnapshots=true in "addons-010148"
	I0819 11:58:03.530837   85279 host.go:66] Checking if "addons-010148" exists ...
	I0819 11:58:03.530771   85279 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:58:03.552220   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.554021   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.570872   85279 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0819 11:58:03.571967   85279 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0819 11:58:03.571990   85279 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0819 11:58:03.572085   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:58:03.575871   85279 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0819 11:58:03.576063   85279 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0819 11:58:03.577137   85279 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 11:58:03.577156   85279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0819 11:58:03.577211   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:58:03.577495   85279 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0819 11:58:03.577511   85279 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0819 11:58:03.577573   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	W0819 11:58:03.581707   85279 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
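The warning above is a capability check rather than a failure: volcano is skipped because the addon declares no support for the crio runtime, and the rest of the run continues. A minimal sketch of that kind of guard, with hypothetical names (assumes import: fmt):

	// Sketch: skip addons that do not support the active container
	// runtime. supportedRuntimes is a hypothetical per-addon allowlist.
	func checkRuntimeSupport(addon, runtime string, supportedRuntimes map[string][]string) error {
		allowed, ok := supportedRuntimes[addon]
		if !ok {
			return nil // no restriction declared
		}
		for _, r := range allowed {
			if r == runtime {
				return nil
			}
		}
		return fmt.Errorf("%s addon does not support %s", addon, runtime)
	}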
	I0819 11:58:03.594551   85279 addons.go:234] Setting addon default-storageclass=true in "addons-010148"
	I0819 11:58:03.594598   85279 host.go:66] Checking if "addons-010148" exists ...
	I0819 11:58:03.595148   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.595785   85279 out.go:177]   - Using image docker.io/registry:2.8.3
	I0819 11:58:03.596016   85279 host.go:66] Checking if "addons-010148" exists ...
	I0819 11:58:03.597209   85279 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0819 11:58:03.598552   85279 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0819 11:58:03.599543   85279 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0819 11:58:03.600447   85279 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0819 11:58:03.600555   85279 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0819 11:58:03.600571   85279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0819 11:58:03.600629   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:58:03.603590   85279 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0819 11:58:03.603761   85279 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0819 11:58:03.603821   85279 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0819 11:58:03.605008   85279 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 11:58:03.605028   85279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0819 11:58:03.605081   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:58:03.605590   85279 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 11:58:03.605609   85279 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 11:58:03.605662   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:58:03.606016   85279 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0819 11:58:03.606032   85279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0819 11:58:03.606080   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:58:03.606103   85279 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0819 11:58:03.607709   85279 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 11:58:03.607866   85279 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0819 11:58:03.608909   85279 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 11:58:03.609097   85279 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0819 11:58:03.609110   85279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0819 11:58:03.609154   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:58:03.609374   85279 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0819 11:58:03.610324   85279 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 11:58:03.610341   85279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0819 11:58:03.610383   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:58:03.612776   85279 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0819 11:58:03.614117   85279 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0819 11:58:03.615320   85279 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0819 11:58:03.616424   85279 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0819 11:58:03.617474   85279 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0819 11:58:03.617500   85279 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0819 11:58:03.617557   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:58:03.632178   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:58:03.638867   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:58:03.642802   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:58:03.644942   85279 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:58:03.645013   85279 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0819 11:58:03.645377   85279 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-010148"
	I0819 11:58:03.645418   85279 host.go:66] Checking if "addons-010148" exists ...
	I0819 11:58:03.645900   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.652367   85279 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0819 11:58:03.652416   85279 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0819 11:58:03.652492   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:58:03.653131   85279 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 11:58:03.653158   85279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 11:58:03.653219   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:58:03.663852   85279 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 11:58:03.663875   85279 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 11:58:03.664017   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:58:03.664342   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:58:03.677286   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:58:03.679523   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:58:03.679903   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:58:03.685518   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:58:03.692439   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 11:58:03.694560   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:58:03.697743   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:58:03.700395   85279 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0819 11:58:03.701703   85279 out.go:177]   - Using image docker.io/busybox:stable
	I0819 11:58:03.703308   85279 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 11:58:03.703328   85279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0819 11:58:03.703379   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:58:03.708456   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:58:03.713610   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:58:03.714424   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:58:03.720144   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
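Every `sshutil.go:53` line above is a fresh SSH session into the node container: the docker inspect template resolves which host port maps to the container's 22/tcp (32768 here), and the per-profile id_rsa key authenticates as the `docker` user. A minimal sketch of opening such a session with golang.org/x/crypto/ssh, using the port and key path visible in the log (assumes imports: os, golang.org/x/crypto/ssh):

	// Sketch: dial the minikube node over the forwarded SSH port and
	// run one command, the way each sshutil client above does.
	func runOnNode(cmd string) ([]byte, error) {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa")
		if err != nil {
			return nil, err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return nil, err
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:32768", &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node
		})
		if err != nil {
			return nil, err
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			return nil, err
		}
		defer session.Close()
		return session.CombinedOutput(cmd)
	}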
	I0819 11:58:03.753005   85279 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 11:58:04.044023   85279 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0819 11:58:04.044053   85279 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0819 11:58:04.048409   85279 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0819 11:58:04.048526   85279 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0819 11:58:04.053674   85279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 11:58:04.143290   85279 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0819 11:58:04.143326   85279 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0819 11:58:04.144183   85279 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0819 11:58:04.144204   85279 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0819 11:58:04.163520   85279 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0819 11:58:04.163549   85279 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0819 11:58:04.248398   85279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 11:58:04.248626   85279 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 11:58:04.248641   85279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0819 11:58:04.252912   85279 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0819 11:58:04.252939   85279 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0819 11:58:04.256803   85279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0819 11:58:04.343462   85279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 11:58:04.354363   85279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 11:58:04.355468   85279 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0819 11:58:04.355492   85279 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0819 11:58:04.358075   85279 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0819 11:58:04.358098   85279 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0819 11:58:04.360111   85279 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0819 11:58:04.360134   85279 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0819 11:58:04.446959   85279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 11:58:04.452287   85279 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0819 11:58:04.452364   85279 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0819 11:58:04.458850   85279 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0819 11:58:04.458924   85279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0819 11:58:04.459795   85279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 11:58:04.460304   85279 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 11:58:04.460370   85279 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 11:58:04.462906   85279 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0819 11:58:04.462944   85279 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0819 11:58:04.556245   85279 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0819 11:58:04.556297   85279 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0819 11:58:04.663727   85279 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0819 11:58:04.663814   85279 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0819 11:58:04.667093   85279 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 11:58:04.667115   85279 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 11:58:04.743367   85279 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0819 11:58:04.743458   85279 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0819 11:58:04.762675   85279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0819 11:58:04.854848   85279 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0819 11:58:04.854968   85279 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0819 11:58:04.863616   85279 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0819 11:58:04.863741   85279 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0819 11:58:04.952241   85279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0819 11:58:04.960441   85279 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0819 11:58:04.960541   85279 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0819 11:58:04.963080   85279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 11:58:05.055387   85279 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0819 11:58:05.055489   85279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0819 11:58:05.143560   85279 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0819 11:58:05.143663   85279 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0819 11:58:05.158672   85279 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0819 11:58:05.158764   85279 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0819 11:58:05.255118   85279 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0819 11:58:05.255161   85279 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0819 11:58:05.450346   85279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0819 11:58:05.459505   85279 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0819 11:58:05.459595   85279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0819 11:58:05.543719   85279 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 11:58:05.543804   85279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0819 11:58:05.566626   85279 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0819 11:58:05.566725   85279 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0819 11:58:05.762729   85279 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0819 11:58:05.762780   85279 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0819 11:58:05.847814   85279 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.15533019s)
	I0819 11:58:05.847904   85279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.794152603s)
	I0819 11:58:05.847915   85279 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
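The long sed pipeline completed above rewrites the coredns ConfigMap in place: it fetches the Corefile, inserts a `hosts` block ahead of the `forward . /etc/resolv.conf` line, and replaces the ConfigMap, so that host.minikube.internal resolves to the docker network gateway (192.168.49.1) from inside the cluster. Per the sed expression, the patched Corefile should contain a stanza like:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}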
	I0819 11:58:05.849280   85279 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.094779949s)
	I0819 11:58:05.850276   85279 node_ready.go:35] waiting up to 6m0s for node "addons-010148" to be "Ready" ...
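node_ready.go now polls the node object until its Ready condition turns True; the node "Ready":"False" lines further down are individual poll results. A minimal client-go sketch of the same check, assuming a configured kubernetes.Interface named cs (assumes imports: context, fmt, time, corev1 "k8s.io/api/core/v1", metav1 "k8s.io/apimachinery/pkg/apis/meta/v1", "k8s.io/client-go/kubernetes"):

	// Sketch: wait until the node's Ready condition is True.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("node %s not Ready within %s", name, timeout)
	}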
	I0819 11:58:05.944195   85279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 11:58:05.948339   85279 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 11:58:05.948377   85279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0819 11:58:05.963260   85279 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0819 11:58:05.963309   85279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0819 11:58:06.242564   85279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 11:58:06.442710   85279 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0819 11:58:06.442753   85279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0819 11:58:06.548392   85279 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-010148" context rescaled to 1 replicas
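"rescaled to 1 replicas" trims CoreDNS from kubeadm's default of two replicas down to one, which is sufficient on a single-node cluster and frees memory for the addons. A minimal sketch of that rescale through the scale subresource, assuming a clientset cs (assumes imports: context, metav1 "k8s.io/apimachinery/pkg/apis/meta/v1", "k8s.io/client-go/kubernetes"):

	// Sketch: scale the coredns deployment to one replica via the
	// scale subresource.
	func scaleCoreDNS(ctx context.Context, cs kubernetes.Interface) error {
		scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		scale.Spec.Replicas = 1
		_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
		return err
	}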
	I0819 11:58:06.845641   85279 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 11:58:06.845723   85279 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0819 11:58:07.058722   85279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 11:58:07.865988   85279 node_ready.go:53] node "addons-010148" has status "Ready":"False"
	I0819 11:58:08.363167   85279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.11473133s)
	I0819 11:58:08.363278   85279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.106433769s)
	I0819 11:58:08.563204   85279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.219693793s)
	I0819 11:58:08.563325   85279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.208929254s)
	W0819 11:58:08.846823   85279 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
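The 'default-storageclass' failure above is an optimistic-concurrency conflict: two writers raced on the `local-path` StorageClass object (the rancher provisioner creates it while minikube tries to demote it), so the update was rejected with "the object has been modified". The standard remedy is to re-get and retry on conflict; a minimal sketch with client-go's retry helper, assuming a clientset cs and context ctx in scope (assumes imports: metav1 "k8s.io/apimachinery/pkg/apis/meta/v1", "k8s.io/client-go/util/retry"):

	// Sketch: mark a StorageClass non-default, retrying on update
	// conflicts. RetryOnConflict re-runs the closure on 409 Conflict.
	err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(ctx, "local-path", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
		_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
		return err
	})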
	I0819 11:58:10.065207   85279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.618143016s)
	I0819 11:58:10.065370   85279 addons.go:475] Verifying addon ingress=true in "addons-010148"
	I0819 11:58:10.065459   85279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.302692184s)
	I0819 11:58:10.065817   85279 addons.go:475] Verifying addon registry=true in "addons-010148"
	I0819 11:58:10.065512   85279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.113171891s)
	I0819 11:58:10.065583   85279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.102414349s)
	I0819 11:58:10.066106   85279 addons.go:475] Verifying addon metrics-server=true in "addons-010148"
	I0819 11:58:10.065636   85279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.615196702s)
	I0819 11:58:10.066002   85279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.605550401s)
	I0819 11:58:10.067563   85279 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-010148 service yakd-dashboard -n yakd-dashboard
	
	I0819 11:58:10.067571   85279 out.go:177] * Verifying ingress addon...
	I0819 11:58:10.067569   85279 out.go:177] * Verifying registry addon...
	I0819 11:58:10.070102   85279 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0819 11:58:10.070132   85279 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0819 11:58:10.146145   85279 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0819 11:58:10.146186   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:10.146373   85279 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0819 11:58:10.146395   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
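Each `kapi.go:96` line that follows is one iteration of a label-selector poll: list the pods matching the selector and report their phase until one reaches Running. A minimal client-go sketch of that loop, assuming a clientset cs (assumes imports: context, fmt, time, corev1 "k8s.io/api/core/v1", metav1 "k8s.io/apimachinery/pkg/apis/meta/v1", "k8s.io/client-go/kubernetes"):

	// Sketch: poll pods by label selector until at least one is Running.
	func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return nil
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("no Running pod for %q in %q within %s", selector, ns, timeout)
	}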
	I0819 11:58:10.353673   85279 node_ready.go:53] node "addons-010148" has status "Ready":"False"
	I0819 11:58:10.573689   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:10.574335   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:10.847077   85279 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0819 11:58:10.847175   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:58:10.874939   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:58:10.882821   85279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.93850512s)
	W0819 11:58:10.882874   85279 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 11:58:10.882901   85279 retry.go:31] will retry after 261.657823ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
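The failure above is the classic CRD ordering race: the VolumeSnapshotClass object sits in the same `kubectl apply` batch as the CRD that defines it, and the API server has not finished registering `snapshot.storage.k8s.io/v1` when the custom resource is submitted, hence "ensure CRDs are installed first". minikube's answer (retry.go here, and the `kubectl apply --force` re-run below) is simply to back off and re-apply. A minimal sketch of that retry shape; apply is a hypothetical stand-in for the kubectl invocation (assumes import: time):

	// Sketch: re-run an apply a few times with backoff, for errors that
	// resolve themselves once CRD registration settles.
	func applyWithRetry(apply func() error, attempts int, backoff time.Duration) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = apply(); err == nil {
				return nil
			}
			time.Sleep(backoff) // the log shows a ~262ms wait before the retry
			backoff *= 2
		}
		return err
	}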
	I0819 11:58:10.882909   85279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.64029641s)
	I0819 11:58:11.062243   85279 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0819 11:58:11.073186   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:11.073946   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:11.083051   85279 addons.go:234] Setting addon gcp-auth=true in "addons-010148"
	I0819 11:58:11.083109   85279 host.go:66] Checking if "addons-010148" exists ...
	I0819 11:58:11.083622   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:11.100216   85279 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0819 11:58:11.100288   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:58:11.117931   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:58:11.145054   85279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 11:58:11.577700   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:11.577978   85279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.519032792s)
	I0819 11:58:11.578022   85279 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-010148"
	I0819 11:58:11.578332   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:11.579711   85279 out.go:177] * Verifying csi-hostpath-driver addon...
	I0819 11:58:11.581555   85279 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0819 11:58:11.645407   85279 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0819 11:58:11.645436   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:12.073710   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:12.074252   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:12.085243   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:12.573247   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:12.573578   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:12.584687   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:12.853168   85279 node_ready.go:53] node "addons-010148" has status "Ready":"False"
	I0819 11:58:13.074054   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:13.074513   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:13.084331   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:13.574224   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:13.575651   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:13.646401   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:14.146385   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:14.148003   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:14.148812   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:14.573562   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:14.574229   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:14.585222   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:14.591560   85279 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.491313064s)
	I0819 11:58:14.591558   85279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.446450312s)
	I0819 11:58:14.593794   85279 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 11:58:14.595133   85279 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0819 11:58:14.596331   85279 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0819 11:58:14.596355   85279 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0819 11:58:14.651283   85279 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0819 11:58:14.651317   85279 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0819 11:58:14.668580   85279 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 11:58:14.668601   85279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0819 11:58:14.685265   85279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 11:58:14.854424   85279 node_ready.go:53] node "addons-010148" has status "Ready":"False"
	I0819 11:58:15.073334   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:15.074107   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:15.085616   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:15.266361   85279 addons.go:475] Verifying addon gcp-auth=true in "addons-010148"
	I0819 11:58:15.267861   85279 out.go:177] * Verifying gcp-auth addon...
	I0819 11:58:15.270081   85279 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0819 11:58:15.273247   85279 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0819 11:58:15.273265   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:15.574527   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:15.574810   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:15.585119   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:15.773978   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:16.074218   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:16.075274   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:16.084189   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:16.273274   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:16.574073   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:16.574761   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:16.585062   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:16.773374   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:17.073931   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:17.074614   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:17.084754   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:17.273336   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:17.353765   85279 node_ready.go:53] node "addons-010148" has status "Ready":"False"
	I0819 11:58:17.574050   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:17.574448   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:17.584612   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:17.773301   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:18.073960   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:18.074439   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:18.084408   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:18.273489   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:18.573785   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:18.574310   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:18.584646   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:18.773293   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:19.073571   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:19.074181   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:19.085141   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:19.274053   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:19.573054   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:19.573624   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:19.584604   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:19.772932   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:19.853490   85279 node_ready.go:53] node "addons-010148" has status "Ready":"False"
	I0819 11:58:20.072997   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:20.073311   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:20.084628   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:20.273086   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:20.573226   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:20.573638   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:20.584475   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:20.772860   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:21.075029   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:21.075332   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:21.084237   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:21.273318   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:21.573775   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:21.574378   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:21.584124   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:21.773959   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:22.072966   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:22.073546   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:22.084269   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:22.273682   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:22.352999   85279 node_ready.go:53] node "addons-010148" has status "Ready":"False"
	I0819 11:58:22.573819   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:22.574505   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:22.584467   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:22.772669   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:22.866753   85279 node_ready.go:49] node "addons-010148" has status "Ready":"True"
	I0819 11:58:22.866777   85279 node_ready.go:38] duration metric: took 17.016472351s for node "addons-010148" to be "Ready" ...
	I0819 11:58:22.866788   85279 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 11:58:22.952525   85279 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-7mkcm" in "kube-system" namespace to be "Ready" ...
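(The node_ready.go and pod_ready.go waits block until the Ready condition is True or the timeout expires. A manual equivalent for one of the system-critical selectors listed above, with the label and 6m0s timeout copied from the log, would be something like the following sketch:

	kubectl --context addons-010148 wait --for=condition=ready --namespace=kube-system pod --selector=k8s-app=kube-dns --timeout=6m0s
)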
	I0819 11:58:23.074159   85279 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0819 11:58:23.074188   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:23.074466   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:23.085436   85279 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0819 11:58:23.085458   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:23.274353   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:23.574933   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:23.575714   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:23.677936   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:23.776342   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:24.074282   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:24.074640   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:24.085380   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:24.273584   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:24.575036   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:24.575937   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:24.644692   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:24.844812   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:24.958558   85279 pod_ready.go:93] pod "coredns-6f6b679f8f-7mkcm" in "kube-system" namespace has status "Ready":"True"
	I0819 11:58:24.958583   85279 pod_ready.go:82] duration metric: took 2.005961009s for pod "coredns-6f6b679f8f-7mkcm" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:24.958610   85279 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-010148" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:24.963913   85279 pod_ready.go:93] pod "etcd-addons-010148" in "kube-system" namespace has status "Ready":"True"
	I0819 11:58:24.963940   85279 pod_ready.go:82] duration metric: took 5.321705ms for pod "etcd-addons-010148" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:24.963964   85279 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-010148" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:24.968664   85279 pod_ready.go:93] pod "kube-apiserver-addons-010148" in "kube-system" namespace has status "Ready":"True"
	I0819 11:58:24.968688   85279 pod_ready.go:82] duration metric: took 4.715827ms for pod "kube-apiserver-addons-010148" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:24.968700   85279 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-010148" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:24.973751   85279 pod_ready.go:93] pod "kube-controller-manager-addons-010148" in "kube-system" namespace has status "Ready":"True"
	I0819 11:58:24.973817   85279 pod_ready.go:82] duration metric: took 5.10758ms for pod "kube-controller-manager-addons-010148" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:24.973866   85279 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-94dm9" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:24.978074   85279 pod_ready.go:93] pod "kube-proxy-94dm9" in "kube-system" namespace has status "Ready":"True"
	I0819 11:58:24.978092   85279 pod_ready.go:82] duration metric: took 4.217561ms for pod "kube-proxy-94dm9" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:24.978100   85279 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-010148" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:25.075373   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:25.075946   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:25.086554   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:25.273964   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:25.356626   85279 pod_ready.go:93] pod "kube-scheduler-addons-010148" in "kube-system" namespace has status "Ready":"True"
	I0819 11:58:25.356652   85279 pod_ready.go:82] duration metric: took 378.544376ms for pod "kube-scheduler-addons-010148" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:25.356665   85279 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:25.574003   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:25.574715   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:25.585448   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:25.772972   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:26.074485   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:26.074791   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:26.085169   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:26.274042   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:26.573859   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:26.574400   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:26.585683   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:26.774154   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:27.076247   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:27.076618   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:27.085629   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:27.273687   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:27.362348   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
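(From this point on, metrics-server-8988944d9-phfcl repeatedly reports "Ready":"False" while the other selectors keep polling. A usual first step to see why a pod is stuck not-Ready is to describe it; the pod name below is taken from the log line above, and this is a manual debugging sketch rather than output from this run:

	kubectl --context addons-010148 describe pod metrics-server-8988944d9-phfcl --namespace=kube-system
)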
	I0819 11:58:27.573774   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:27.574127   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:27.585382   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:27.773768   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:28.073522   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:28.073744   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:28.085244   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:28.273675   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:28.574179   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:28.574327   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:28.585237   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:28.774846   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:29.074090   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:29.074197   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:29.086374   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:29.273717   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:29.362940   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:29.574904   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:29.575412   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:29.585815   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:29.773756   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:30.074017   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:30.074361   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:30.086135   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:30.273163   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:30.574558   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:30.574907   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:30.585533   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:30.773250   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:31.074149   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:31.074359   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:31.085982   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:31.273747   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:31.363556   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:31.573908   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:31.574117   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:31.586272   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:31.775979   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:32.074268   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:32.074444   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:32.148479   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:32.345255   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:32.574185   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:32.574473   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:32.586363   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:32.773386   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:33.074453   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:33.074628   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:33.085214   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:33.272894   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:33.574662   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:33.575173   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:33.586177   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:33.773579   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:33.863290   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:34.074084   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:34.074348   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:34.085011   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:34.273066   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:34.574250   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:34.574736   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:34.585459   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:34.773993   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:35.074370   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:35.074692   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:35.085013   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:35.273561   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:35.574296   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:35.574467   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:35.585136   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:35.773459   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:36.074319   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:36.074551   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:36.085553   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:36.274051   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:36.362686   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:36.573827   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:36.575025   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:36.585410   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:36.773003   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:37.074247   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:37.074527   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:37.085608   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:37.273612   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:37.574018   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:37.574286   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:37.585633   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:37.773497   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:38.074326   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:38.074873   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:38.084974   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:38.272932   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:38.362854   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:38.574325   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:38.574473   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:38.585819   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:38.774712   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:39.073623   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:39.074240   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:39.086337   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:39.274059   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:39.574201   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:39.574606   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:39.585550   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:39.773641   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:40.074949   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:40.077199   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:40.085640   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:40.273898   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:40.363056   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:40.574499   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:40.574731   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:40.584962   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:40.773093   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:41.074071   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:41.074459   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:41.085075   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:41.274827   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:41.573871   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:41.574112   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:41.585698   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:41.773413   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:42.073823   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:42.074022   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:42.085519   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:42.273788   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:42.573911   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:42.574141   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:42.585663   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:42.773730   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:42.862617   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:43.073635   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:43.074306   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:43.085918   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:43.272888   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:43.574217   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:43.574563   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:43.587193   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:43.773801   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:44.074523   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:44.074771   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:44.175964   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:44.273599   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:44.573978   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:44.574333   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:44.586476   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:44.773349   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:45.073929   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:45.074232   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:45.086742   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:45.273379   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:45.363310   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:45.574868   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:45.575475   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:45.585755   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:45.773769   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:46.075196   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:46.075771   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:46.086104   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:46.274113   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:46.574716   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:46.576222   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:46.586108   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:46.773784   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:47.074894   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:47.075293   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:47.085601   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:47.273748   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:47.574190   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:47.574751   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:47.585686   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:47.773319   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:47.861809   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:48.074222   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:48.074822   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:48.085796   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:48.273405   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:48.574373   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:48.574457   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:48.584792   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:48.774026   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:49.074171   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:49.074795   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:49.085338   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:49.273337   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:49.573974   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:49.574277   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:49.585784   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:49.773036   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:50.073975   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:50.074327   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:50.085255   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:50.274138   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:50.362548   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:50.574420   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:50.574868   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:50.585409   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:50.773534   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:51.074347   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:51.074643   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:51.086049   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:51.273714   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:51.574032   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:51.574377   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:51.586082   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:51.773221   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:52.074690   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:52.074947   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:52.085686   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:52.273889   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:52.362739   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:52.574054   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:52.574058   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:52.585606   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:52.773574   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:53.074153   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:53.074479   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:53.084668   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:53.274033   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:53.573993   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:53.574399   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:53.585755   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:53.772727   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:54.074791   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:54.075304   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:54.085798   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:54.273749   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:54.363429   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:54.573655   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:54.574106   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:54.586460   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:54.773569   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:55.074948   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:55.075519   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:55.087071   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:55.274207   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:55.574220   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:55.574403   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:55.585174   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:55.773511   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:56.074429   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:56.074502   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:56.086066   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:56.273774   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:56.574272   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:56.574895   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:56.585703   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:56.774010   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:56.863670   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:57.074244   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:57.074445   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:57.084897   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:57.272688   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:57.573593   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:57.573820   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:57.585437   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:57.774103   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:58.074573   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:58.074884   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:58.085469   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:58.273477   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:58.573531   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:58.573718   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:58.585282   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:58.773522   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:59.074280   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:59.074691   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:59.085379   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:59.273609   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:59.362305   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:59.574795   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:59.574942   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:59.585694   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:59.773600   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:00.074135   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:00.074592   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:00.085548   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:00.273767   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:00.576763   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:00.577089   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:00.585564   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:00.773570   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:01.074111   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:01.074437   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:01.085039   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:01.274178   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:01.574335   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:01.574675   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:01.585428   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:01.773622   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:01.864751   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:02.075581   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:02.077776   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:02.145610   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:02.273361   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:02.574313   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:02.574489   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:02.646128   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:02.773914   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:03.146552   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:03.148093   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:03.149764   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:03.362425   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:03.648995   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:03.650577   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:03.651236   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:03.774306   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:04.074462   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:04.075028   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:04.086328   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:04.274308   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:04.362828   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:04.574238   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:04.574878   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:04.585243   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:04.773489   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:05.074980   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:05.075433   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:05.086368   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:05.273996   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:05.573788   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:05.574373   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:05.586566   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:05.773166   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:06.074984   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:06.075224   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:06.085583   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:06.274153   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:06.574153   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:06.574481   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:06.586388   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:06.773913   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:06.863282   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:07.074204   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:07.074651   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:07.085316   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:07.273407   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:07.574413   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:07.574680   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:07.585645   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:07.773360   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:08.073998   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:08.074333   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:08.086043   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:08.273304   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:08.574210   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:08.574626   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:08.585305   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:08.773621   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:09.074050   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:09.074186   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:09.086046   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:09.273042   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:09.362305   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:09.573567   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:09.573773   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:09.585879   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:09.773435   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:10.074571   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:10.074912   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:10.085898   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:10.273645   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:10.573787   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:10.574154   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:10.585766   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:10.773685   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:11.074401   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:11.074929   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:11.085830   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:11.273557   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:11.362500   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:11.573663   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:11.574004   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:11.585571   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:11.773786   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:12.074768   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:12.075032   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:12.085541   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:12.273965   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:12.576687   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:12.577745   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:12.586393   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:12.774465   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:13.074293   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:13.074812   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:13.086015   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:13.273423   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:13.362848   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:13.574508   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:13.574899   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:13.585582   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:13.773977   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:14.074058   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:14.074456   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:14.086758   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:14.273591   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:14.574278   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:14.574800   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:14.676079   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:14.773054   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:15.074461   85279 kapi.go:107] duration metric: took 1m5.004356228s to wait for kubernetes.io/minikube-addons=registry ...
	I0819 11:59:15.074944   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:15.085795   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:15.273533   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:15.574696   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:15.585178   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:15.773400   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:15.862354   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:16.073962   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:16.085534   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:16.274067   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:16.574575   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:16.585663   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:16.774018   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:17.074443   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:17.084959   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:17.272919   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:17.574094   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:17.585702   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:17.773971   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:17.863412   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:18.074563   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:18.085444   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:18.273668   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:18.574645   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:18.585912   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:18.774433   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:19.075073   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:19.085538   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:19.274057   85279 kapi.go:107] duration metric: took 1m4.003971297s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0819 11:59:19.276134   85279 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-010148 cluster.
	I0819 11:59:19.277658   85279 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0819 11:59:19.343377   85279 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0819 11:59:19.646459   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:19.649315   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:19.875860   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:20.147006   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:20.147221   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:20.646707   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:20.650208   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:21.149789   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:21.150696   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:21.575272   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:21.586422   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:22.074445   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:22.087116   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:22.362976   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:22.574631   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:22.586465   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:23.073768   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:23.086457   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:23.574273   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:23.586712   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:24.075347   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:24.086242   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:24.363254   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:24.575305   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:24.585759   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:25.075702   85279 kapi.go:107] duration metric: took 1m15.005562553s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0819 11:59:25.087543   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:25.586404   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:26.147055   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:26.585775   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:26.862636   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:27.086673   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:27.585634   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:28.086365   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:28.586470   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:28.863337   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:29.086235   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:29.586565   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:30.087169   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:30.588245   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:31.085709   85279 kapi.go:107] duration metric: took 1m19.504149657s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0819 11:59:31.087448   85279 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, cloud-spanner, storage-provisioner-rancher, helm-tiller, metrics-server, ingress-dns, yakd, inspektor-gadget, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0819 11:59:31.088598   85279 addons.go:510] duration metric: took 1m27.564093654s for enable addons: enabled=[nvidia-device-plugin storage-provisioner cloud-spanner storage-provisioner-rancher helm-tiller metrics-server ingress-dns yakd inspektor-gadget volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0819 11:59:31.362282   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:33.362500   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:35.862539   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:38.362679   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:40.362791   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:42.861878   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:44.863531   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:47.362674   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:49.863265   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:50.862432   85279 pod_ready.go:93] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"True"
	I0819 11:59:50.862455   85279 pod_ready.go:82] duration metric: took 1m25.505781989s for pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace to be "Ready" ...
	I0819 11:59:50.862464   85279 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-9gfqj" in "kube-system" namespace to be "Ready" ...
	I0819 11:59:50.866256   85279 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-9gfqj" in "kube-system" namespace has status "Ready":"True"
	I0819 11:59:50.866273   85279 pod_ready.go:82] duration metric: took 3.803358ms for pod "nvidia-device-plugin-daemonset-9gfqj" in "kube-system" namespace to be "Ready" ...
	I0819 11:59:50.866290   85279 pod_ready.go:39] duration metric: took 1m27.999491178s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 11:59:50.866345   85279 api_server.go:52] waiting for apiserver process to appear ...
	I0819 11:59:50.866382   85279 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 11:59:50.866431   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 11:59:50.901427   85279 cri.go:89] found id: "36d77af4416f318c7dd2e515b53d2f8e8f6a7aff8c1a76effc3dd34ac817ffa8"
	I0819 11:59:50.901448   85279 cri.go:89] found id: ""
	I0819 11:59:50.901463   85279 logs.go:276] 1 containers: [36d77af4416f318c7dd2e515b53d2f8e8f6a7aff8c1a76effc3dd34ac817ffa8]
	I0819 11:59:50.901521   85279 ssh_runner.go:195] Run: which crictl
	I0819 11:59:50.904691   85279 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 11:59:50.904752   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 11:59:50.940456   85279 cri.go:89] found id: "e17cd1075970dd8b734c02644a4b292bf0526aa5bf9947e2c87dd79e944a7707"
	I0819 11:59:50.940478   85279 cri.go:89] found id: ""
	I0819 11:59:50.940486   85279 logs.go:276] 1 containers: [e17cd1075970dd8b734c02644a4b292bf0526aa5bf9947e2c87dd79e944a7707]
	I0819 11:59:50.940535   85279 ssh_runner.go:195] Run: which crictl
	I0819 11:59:50.943807   85279 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 11:59:50.943872   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 11:59:50.977418   85279 cri.go:89] found id: "1ba40d35141d7f28ec8d67877f5f2cc2cb772315a96c8bdbb483de0aa075499f"
	I0819 11:59:50.977443   85279 cri.go:89] found id: ""
	I0819 11:59:50.977450   85279 logs.go:276] 1 containers: [1ba40d35141d7f28ec8d67877f5f2cc2cb772315a96c8bdbb483de0aa075499f]
	I0819 11:59:50.977504   85279 ssh_runner.go:195] Run: which crictl
	I0819 11:59:50.981023   85279 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 11:59:50.981074   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 11:59:51.013426   85279 cri.go:89] found id: "7fbf8e09fb2a038cef54df33300676b83e9136e5a8678f01941fa74163e76773"
	I0819 11:59:51.013446   85279 cri.go:89] found id: ""
	I0819 11:59:51.013453   85279 logs.go:276] 1 containers: [7fbf8e09fb2a038cef54df33300676b83e9136e5a8678f01941fa74163e76773]
	I0819 11:59:51.013503   85279 ssh_runner.go:195] Run: which crictl
	I0819 11:59:51.016662   85279 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 11:59:51.016727   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 11:59:51.052911   85279 cri.go:89] found id: "7576faede8f131e25b57c7b4f57de41151a944c8b2543a6464d7dcb00ba56c70"
	I0819 11:59:51.052930   85279 cri.go:89] found id: ""
	I0819 11:59:51.052938   85279 logs.go:276] 1 containers: [7576faede8f131e25b57c7b4f57de41151a944c8b2543a6464d7dcb00ba56c70]
	I0819 11:59:51.052998   85279 ssh_runner.go:195] Run: which crictl
	I0819 11:59:51.056280   85279 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 11:59:51.056356   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 11:59:51.091967   85279 cri.go:89] found id: "8061bb277832a86953e5674d2e2e5818133dadf970b95c826e4a6d669caf8824"
	I0819 11:59:51.091993   85279 cri.go:89] found id: ""
	I0819 11:59:51.092003   85279 logs.go:276] 1 containers: [8061bb277832a86953e5674d2e2e5818133dadf970b95c826e4a6d669caf8824]
	I0819 11:59:51.092061   85279 ssh_runner.go:195] Run: which crictl
	I0819 11:59:51.095684   85279 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 11:59:51.095760   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 11:59:51.163716   85279 cri.go:89] found id: "f31388602abfee12cd9da8716a9bfc73defa40f23787afd89f601672911eca12"
	I0819 11:59:51.163735   85279 cri.go:89] found id: ""
	I0819 11:59:51.163743   85279 logs.go:276] 1 containers: [f31388602abfee12cd9da8716a9bfc73defa40f23787afd89f601672911eca12]
	I0819 11:59:51.163790   85279 ssh_runner.go:195] Run: which crictl
	I0819 11:59:51.166958   85279 logs.go:123] Gathering logs for kube-controller-manager [8061bb277832a86953e5674d2e2e5818133dadf970b95c826e4a6d669caf8824] ...
	I0819 11:59:51.166979   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8061bb277832a86953e5674d2e2e5818133dadf970b95c826e4a6d669caf8824"
	I0819 11:59:51.230940   85279 logs.go:123] Gathering logs for container status ...
	I0819 11:59:51.230986   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:59:51.285488   85279 logs.go:123] Gathering logs for etcd [e17cd1075970dd8b734c02644a4b292bf0526aa5bf9947e2c87dd79e944a7707] ...
	I0819 11:59:51.285524   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e17cd1075970dd8b734c02644a4b292bf0526aa5bf9947e2c87dd79e944a7707"
	I0819 11:59:51.356531   85279 logs.go:123] Gathering logs for coredns [1ba40d35141d7f28ec8d67877f5f2cc2cb772315a96c8bdbb483de0aa075499f] ...
	I0819 11:59:51.356564   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ba40d35141d7f28ec8d67877f5f2cc2cb772315a96c8bdbb483de0aa075499f"
	I0819 11:59:51.393549   85279 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:59:51.393591   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:59:51.576925   85279 logs.go:123] Gathering logs for kube-apiserver [36d77af4416f318c7dd2e515b53d2f8e8f6a7aff8c1a76effc3dd34ac817ffa8] ...
	I0819 11:59:51.576956   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36d77af4416f318c7dd2e515b53d2f8e8f6a7aff8c1a76effc3dd34ac817ffa8"
	I0819 11:59:51.620992   85279 logs.go:123] Gathering logs for kube-scheduler [7fbf8e09fb2a038cef54df33300676b83e9136e5a8678f01941fa74163e76773] ...
	I0819 11:59:51.621028   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fbf8e09fb2a038cef54df33300676b83e9136e5a8678f01941fa74163e76773"
	I0819 11:59:51.663801   85279 logs.go:123] Gathering logs for kube-proxy [7576faede8f131e25b57c7b4f57de41151a944c8b2543a6464d7dcb00ba56c70] ...
	I0819 11:59:51.663847   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7576faede8f131e25b57c7b4f57de41151a944c8b2543a6464d7dcb00ba56c70"
	I0819 11:59:51.697290   85279 logs.go:123] Gathering logs for kindnet [f31388602abfee12cd9da8716a9bfc73defa40f23787afd89f601672911eca12] ...
	I0819 11:59:51.697322   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f31388602abfee12cd9da8716a9bfc73defa40f23787afd89f601672911eca12"
	I0819 11:59:51.734603   85279 logs.go:123] Gathering logs for CRI-O ...
	I0819 11:59:51.734633   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 11:59:51.807841   85279 logs.go:123] Gathering logs for kubelet ...
	I0819 11:59:51.807880   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 11:59:51.829390   85279 logs.go:138] Found kubelet problem: Aug 19 11:58:03 addons-010148 kubelet[1622]: W0819 11:58:03.846746    1622 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-010148" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-010148' and this object
	W0819 11:59:51.829586   85279 logs.go:138] Found kubelet problem: Aug 19 11:58:03 addons-010148 kubelet[1622]: E0819 11:58:03.846806    1622 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-010148\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-010148' and this object" logger="UnhandledError"
	I0819 11:59:51.876543   85279 logs.go:123] Gathering logs for dmesg ...
	I0819 11:59:51.876586   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:59:51.895329   85279 out.go:358] Setting ErrFile to fd 2...
	I0819 11:59:51.895360   85279 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 11:59:51.895421   85279 out.go:270] X Problems detected in kubelet:
	W0819 11:59:51.895436   85279 out.go:270]   Aug 19 11:58:03 addons-010148 kubelet[1622]: W0819 11:58:03.846746    1622 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-010148" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-010148' and this object
	W0819 11:59:51.895445   85279 out.go:270]   Aug 19 11:58:03 addons-010148 kubelet[1622]: E0819 11:58:03.846806    1622 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-010148\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-010148' and this object" logger="UnhandledError"
	I0819 11:59:51.895457   85279 out.go:358] Setting ErrFile to fd 2...
	I0819 11:59:51.895463   85279 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:00:01.896246   85279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:00:01.910582   85279 api_server.go:72] duration metric: took 1m58.386144751s to wait for apiserver process to appear ...
	I0819 12:00:01.910613   85279 api_server.go:88] waiting for apiserver healthz status ...
	I0819 12:00:01.910677   85279 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 12:00:01.910746   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 12:00:01.946693   85279 cri.go:89] found id: "36d77af4416f318c7dd2e515b53d2f8e8f6a7aff8c1a76effc3dd34ac817ffa8"
	I0819 12:00:01.946720   85279 cri.go:89] found id: ""
	I0819 12:00:01.946731   85279 logs.go:276] 1 containers: [36d77af4416f318c7dd2e515b53d2f8e8f6a7aff8c1a76effc3dd34ac817ffa8]
	I0819 12:00:01.946797   85279 ssh_runner.go:195] Run: which crictl
	I0819 12:00:01.950769   85279 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 12:00:01.950854   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 12:00:01.987423   85279 cri.go:89] found id: "e17cd1075970dd8b734c02644a4b292bf0526aa5bf9947e2c87dd79e944a7707"
	I0819 12:00:01.987452   85279 cri.go:89] found id: ""
	I0819 12:00:01.987464   85279 logs.go:276] 1 containers: [e17cd1075970dd8b734c02644a4b292bf0526aa5bf9947e2c87dd79e944a7707]
	I0819 12:00:01.987519   85279 ssh_runner.go:195] Run: which crictl
	I0819 12:00:01.991034   85279 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 12:00:01.991110   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 12:00:02.026336   85279 cri.go:89] found id: "1ba40d35141d7f28ec8d67877f5f2cc2cb772315a96c8bdbb483de0aa075499f"
	I0819 12:00:02.026368   85279 cri.go:89] found id: ""
	I0819 12:00:02.026379   85279 logs.go:276] 1 containers: [1ba40d35141d7f28ec8d67877f5f2cc2cb772315a96c8bdbb483de0aa075499f]
	I0819 12:00:02.026429   85279 ssh_runner.go:195] Run: which crictl
	I0819 12:00:02.030051   85279 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 12:00:02.030117   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 12:00:02.065326   85279 cri.go:89] found id: "7fbf8e09fb2a038cef54df33300676b83e9136e5a8678f01941fa74163e76773"
	I0819 12:00:02.065348   85279 cri.go:89] found id: ""
	I0819 12:00:02.065355   85279 logs.go:276] 1 containers: [7fbf8e09fb2a038cef54df33300676b83e9136e5a8678f01941fa74163e76773]
	I0819 12:00:02.065405   85279 ssh_runner.go:195] Run: which crictl
	I0819 12:00:02.069014   85279 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 12:00:02.069076   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 12:00:02.105104   85279 cri.go:89] found id: "7576faede8f131e25b57c7b4f57de41151a944c8b2543a6464d7dcb00ba56c70"
	I0819 12:00:02.105127   85279 cri.go:89] found id: ""
	I0819 12:00:02.105134   85279 logs.go:276] 1 containers: [7576faede8f131e25b57c7b4f57de41151a944c8b2543a6464d7dcb00ba56c70]
	I0819 12:00:02.105184   85279 ssh_runner.go:195] Run: which crictl
	I0819 12:00:02.108778   85279 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 12:00:02.108859   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 12:00:02.146311   85279 cri.go:89] found id: "8061bb277832a86953e5674d2e2e5818133dadf970b95c826e4a6d669caf8824"
	I0819 12:00:02.146338   85279 cri.go:89] found id: ""
	I0819 12:00:02.146349   85279 logs.go:276] 1 containers: [8061bb277832a86953e5674d2e2e5818133dadf970b95c826e4a6d669caf8824]
	I0819 12:00:02.146403   85279 ssh_runner.go:195] Run: which crictl
	I0819 12:00:02.150168   85279 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 12:00:02.150233   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 12:00:02.186023   85279 cri.go:89] found id: "f31388602abfee12cd9da8716a9bfc73defa40f23787afd89f601672911eca12"
	I0819 12:00:02.186045   85279 cri.go:89] found id: ""
	I0819 12:00:02.186052   85279 logs.go:276] 1 containers: [f31388602abfee12cd9da8716a9bfc73defa40f23787afd89f601672911eca12]
	I0819 12:00:02.186102   85279 ssh_runner.go:195] Run: which crictl
	I0819 12:00:02.189602   85279 logs.go:123] Gathering logs for kubelet ...
	I0819 12:00:02.189629   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 12:00:02.210345   85279 logs.go:138] Found kubelet problem: Aug 19 11:58:03 addons-010148 kubelet[1622]: W0819 11:58:03.846746    1622 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-010148" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-010148' and this object
	W0819 12:00:02.210521   85279 logs.go:138] Found kubelet problem: Aug 19 11:58:03 addons-010148 kubelet[1622]: E0819 11:58:03.846806    1622 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-010148\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-010148' and this object" logger="UnhandledError"
	I0819 12:00:02.259643   85279 logs.go:123] Gathering logs for etcd [e17cd1075970dd8b734c02644a4b292bf0526aa5bf9947e2c87dd79e944a7707] ...
	I0819 12:00:02.259687   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e17cd1075970dd8b734c02644a4b292bf0526aa5bf9947e2c87dd79e944a7707"
	I0819 12:00:02.301302   85279 logs.go:123] Gathering logs for kube-proxy [7576faede8f131e25b57c7b4f57de41151a944c8b2543a6464d7dcb00ba56c70] ...
	I0819 12:00:02.301343   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7576faede8f131e25b57c7b4f57de41151a944c8b2543a6464d7dcb00ba56c70"
	I0819 12:00:02.335269   85279 logs.go:123] Gathering logs for kube-controller-manager [8061bb277832a86953e5674d2e2e5818133dadf970b95c826e4a6d669caf8824] ...
	I0819 12:00:02.335299   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8061bb277832a86953e5674d2e2e5818133dadf970b95c826e4a6d669caf8824"
	I0819 12:00:02.393810   85279 logs.go:123] Gathering logs for container status ...
	I0819 12:00:02.393873   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 12:00:02.440020   85279 logs.go:123] Gathering logs for dmesg ...
	I0819 12:00:02.440057   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 12:00:02.461096   85279 logs.go:123] Gathering logs for describe nodes ...
	I0819 12:00:02.461135   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 12:00:02.564652   85279 logs.go:123] Gathering logs for kube-apiserver [36d77af4416f318c7dd2e515b53d2f8e8f6a7aff8c1a76effc3dd34ac817ffa8] ...
	I0819 12:00:02.564688   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36d77af4416f318c7dd2e515b53d2f8e8f6a7aff8c1a76effc3dd34ac817ffa8"
	I0819 12:00:02.610828   85279 logs.go:123] Gathering logs for coredns [1ba40d35141d7f28ec8d67877f5f2cc2cb772315a96c8bdbb483de0aa075499f] ...
	I0819 12:00:02.610871   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ba40d35141d7f28ec8d67877f5f2cc2cb772315a96c8bdbb483de0aa075499f"
	I0819 12:00:02.646438   85279 logs.go:123] Gathering logs for kube-scheduler [7fbf8e09fb2a038cef54df33300676b83e9136e5a8678f01941fa74163e76773] ...
	I0819 12:00:02.646471   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fbf8e09fb2a038cef54df33300676b83e9136e5a8678f01941fa74163e76773"
	I0819 12:00:02.685731   85279 logs.go:123] Gathering logs for kindnet [f31388602abfee12cd9da8716a9bfc73defa40f23787afd89f601672911eca12] ...
	I0819 12:00:02.685765   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f31388602abfee12cd9da8716a9bfc73defa40f23787afd89f601672911eca12"
	I0819 12:00:02.728312   85279 logs.go:123] Gathering logs for CRI-O ...
	I0819 12:00:02.728352   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 12:00:02.808759   85279 out.go:358] Setting ErrFile to fd 2...
	I0819 12:00:02.808802   85279 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 12:00:02.808871   85279 out.go:270] X Problems detected in kubelet:
	W0819 12:00:02.808883   85279 out.go:270]   Aug 19 11:58:03 addons-010148 kubelet[1622]: W0819 11:58:03.846746    1622 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-010148" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-010148' and this object
	W0819 12:00:02.808893   85279 out.go:270]   Aug 19 11:58:03 addons-010148 kubelet[1622]: E0819 11:58:03.846806    1622 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-010148\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-010148' and this object" logger="UnhandledError"
	I0819 12:00:02.808906   85279 out.go:358] Setting ErrFile to fd 2...
	I0819 12:00:02.808911   85279 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:00:12.809142   85279 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 12:00:12.812957   85279 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0819 12:00:12.813943   85279 api_server.go:141] control plane version: v1.31.0
	I0819 12:00:12.813967   85279 api_server.go:131] duration metric: took 10.903346298s to wait for apiserver health ...
	I0819 12:00:12.813977   85279 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 12:00:12.814006   85279 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 12:00:12.814066   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 12:00:12.848238   85279 cri.go:89] found id: "36d77af4416f318c7dd2e515b53d2f8e8f6a7aff8c1a76effc3dd34ac817ffa8"
	I0819 12:00:12.848260   85279 cri.go:89] found id: ""
	I0819 12:00:12.848268   85279 logs.go:276] 1 containers: [36d77af4416f318c7dd2e515b53d2f8e8f6a7aff8c1a76effc3dd34ac817ffa8]
	I0819 12:00:12.848310   85279 ssh_runner.go:195] Run: which crictl
	I0819 12:00:12.851670   85279 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 12:00:12.851730   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 12:00:12.884667   85279 cri.go:89] found id: "e17cd1075970dd8b734c02644a4b292bf0526aa5bf9947e2c87dd79e944a7707"
	I0819 12:00:12.884689   85279 cri.go:89] found id: ""
	I0819 12:00:12.884697   85279 logs.go:276] 1 containers: [e17cd1075970dd8b734c02644a4b292bf0526aa5bf9947e2c87dd79e944a7707]
	I0819 12:00:12.884747   85279 ssh_runner.go:195] Run: which crictl
	I0819 12:00:12.887886   85279 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 12:00:12.887958   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 12:00:12.922235   85279 cri.go:89] found id: "1ba40d35141d7f28ec8d67877f5f2cc2cb772315a96c8bdbb483de0aa075499f"
	I0819 12:00:12.922255   85279 cri.go:89] found id: ""
	I0819 12:00:12.922264   85279 logs.go:276] 1 containers: [1ba40d35141d7f28ec8d67877f5f2cc2cb772315a96c8bdbb483de0aa075499f]
	I0819 12:00:12.922321   85279 ssh_runner.go:195] Run: which crictl
	I0819 12:00:12.925705   85279 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 12:00:12.925769   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 12:00:12.959098   85279 cri.go:89] found id: "7fbf8e09fb2a038cef54df33300676b83e9136e5a8678f01941fa74163e76773"
	I0819 12:00:12.959118   85279 cri.go:89] found id: ""
	I0819 12:00:12.959125   85279 logs.go:276] 1 containers: [7fbf8e09fb2a038cef54df33300676b83e9136e5a8678f01941fa74163e76773]
	I0819 12:00:12.959172   85279 ssh_runner.go:195] Run: which crictl
	I0819 12:00:12.962537   85279 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 12:00:12.962601   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 12:00:12.996596   85279 cri.go:89] found id: "7576faede8f131e25b57c7b4f57de41151a944c8b2543a6464d7dcb00ba56c70"
	I0819 12:00:12.996622   85279 cri.go:89] found id: ""
	I0819 12:00:12.996632   85279 logs.go:276] 1 containers: [7576faede8f131e25b57c7b4f57de41151a944c8b2543a6464d7dcb00ba56c70]
	I0819 12:00:12.996680   85279 ssh_runner.go:195] Run: which crictl
	I0819 12:00:13.000166   85279 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 12:00:13.000227   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 12:00:13.032895   85279 cri.go:89] found id: "8061bb277832a86953e5674d2e2e5818133dadf970b95c826e4a6d669caf8824"
	I0819 12:00:13.032917   85279 cri.go:89] found id: ""
	I0819 12:00:13.032925   85279 logs.go:276] 1 containers: [8061bb277832a86953e5674d2e2e5818133dadf970b95c826e4a6d669caf8824]
	I0819 12:00:13.032982   85279 ssh_runner.go:195] Run: which crictl
	I0819 12:00:13.036143   85279 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 12:00:13.036203   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 12:00:13.068287   85279 cri.go:89] found id: "f31388602abfee12cd9da8716a9bfc73defa40f23787afd89f601672911eca12"
	I0819 12:00:13.068313   85279 cri.go:89] found id: ""
	I0819 12:00:13.068323   85279 logs.go:276] 1 containers: [f31388602abfee12cd9da8716a9bfc73defa40f23787afd89f601672911eca12]
	I0819 12:00:13.068386   85279 ssh_runner.go:195] Run: which crictl
	I0819 12:00:13.071651   85279 logs.go:123] Gathering logs for kindnet [f31388602abfee12cd9da8716a9bfc73defa40f23787afd89f601672911eca12] ...
	I0819 12:00:13.071675   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f31388602abfee12cd9da8716a9bfc73defa40f23787afd89f601672911eca12"
	I0819 12:00:13.111266   85279 logs.go:123] Gathering logs for CRI-O ...
	I0819 12:00:13.111309   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 12:00:13.183780   85279 logs.go:123] Gathering logs for kube-apiserver [36d77af4416f318c7dd2e515b53d2f8e8f6a7aff8c1a76effc3dd34ac817ffa8] ...
	I0819 12:00:13.183819   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36d77af4416f318c7dd2e515b53d2f8e8f6a7aff8c1a76effc3dd34ac817ffa8"
	I0819 12:00:13.228385   85279 logs.go:123] Gathering logs for etcd [e17cd1075970dd8b734c02644a4b292bf0526aa5bf9947e2c87dd79e944a7707] ...
	I0819 12:00:13.228412   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e17cd1075970dd8b734c02644a4b292bf0526aa5bf9947e2c87dd79e944a7707"
	I0819 12:00:13.269991   85279 logs.go:123] Gathering logs for coredns [1ba40d35141d7f28ec8d67877f5f2cc2cb772315a96c8bdbb483de0aa075499f] ...
	I0819 12:00:13.270020   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ba40d35141d7f28ec8d67877f5f2cc2cb772315a96c8bdbb483de0aa075499f"
	I0819 12:00:13.304693   85279 logs.go:123] Gathering logs for kube-scheduler [7fbf8e09fb2a038cef54df33300676b83e9136e5a8678f01941fa74163e76773] ...
	I0819 12:00:13.304725   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fbf8e09fb2a038cef54df33300676b83e9136e5a8678f01941fa74163e76773"
	I0819 12:00:13.343624   85279 logs.go:123] Gathering logs for kube-proxy [7576faede8f131e25b57c7b4f57de41151a944c8b2543a6464d7dcb00ba56c70] ...
	I0819 12:00:13.343660   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7576faede8f131e25b57c7b4f57de41151a944c8b2543a6464d7dcb00ba56c70"
	I0819 12:00:13.376484   85279 logs.go:123] Gathering logs for kube-controller-manager [8061bb277832a86953e5674d2e2e5818133dadf970b95c826e4a6d669caf8824] ...
	I0819 12:00:13.376512   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8061bb277832a86953e5674d2e2e5818133dadf970b95c826e4a6d669caf8824"
	I0819 12:00:13.432173   85279 logs.go:123] Gathering logs for container status ...
	I0819 12:00:13.432286   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 12:00:13.472949   85279 logs.go:123] Gathering logs for kubelet ...
	I0819 12:00:13.472977   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 12:00:13.494713   85279 logs.go:138] Found kubelet problem: Aug 19 11:58:03 addons-010148 kubelet[1622]: W0819 11:58:03.846746    1622 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-010148" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-010148' and this object
	W0819 12:00:13.494893   85279 logs.go:138] Found kubelet problem: Aug 19 11:58:03 addons-010148 kubelet[1622]: E0819 11:58:03.846806    1622 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-010148\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-010148' and this object" logger="UnhandledError"
	I0819 12:00:13.543564   85279 logs.go:123] Gathering logs for dmesg ...
	I0819 12:00:13.543602   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 12:00:13.562749   85279 logs.go:123] Gathering logs for describe nodes ...
	I0819 12:00:13.562780   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 12:00:13.658386   85279 out.go:358] Setting ErrFile to fd 2...
	I0819 12:00:13.658410   85279 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 12:00:13.658474   85279 out.go:270] X Problems detected in kubelet:
	W0819 12:00:13.658487   85279 out.go:270]   Aug 19 11:58:03 addons-010148 kubelet[1622]: W0819 11:58:03.846746    1622 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-010148" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-010148' and this object
	W0819 12:00:13.658493   85279 out.go:270]   Aug 19 11:58:03 addons-010148 kubelet[1622]: E0819 11:58:03.846806    1622 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-010148\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-010148' and this object" logger="UnhandledError"
	I0819 12:00:13.658501   85279 out.go:358] Setting ErrFile to fd 2...
	I0819 12:00:13.658506   85279 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:00:23.669062   85279 system_pods.go:59] 19 kube-system pods found
	I0819 12:00:23.669125   85279 system_pods.go:61] "coredns-6f6b679f8f-7mkcm" [c6c9f0bb-626f-4b5c-addb-605b703dad1a] Running
	I0819 12:00:23.669140   85279 system_pods.go:61] "csi-hostpath-attacher-0" [d3bb6a3e-0662-420b-b481-2520d71bb56a] Running
	I0819 12:00:23.669145   85279 system_pods.go:61] "csi-hostpath-resizer-0" [325c3846-5ce5-492a-b33a-662b8e3786c1] Running
	I0819 12:00:23.669151   85279 system_pods.go:61] "csi-hostpathplugin-2s76k" [0d7cc92a-db70-4d11-b4f3-7c4990113f97] Running
	I0819 12:00:23.669158   85279 system_pods.go:61] "etcd-addons-010148" [4c3e9bff-8d94-4b44-9b21-9d4208060167] Running
	I0819 12:00:23.669164   85279 system_pods.go:61] "kindnet-cppjb" [367f146a-254f-4dc3-b429-a96edfbe5d80] Running
	I0819 12:00:23.669170   85279 system_pods.go:61] "kube-apiserver-addons-010148" [6775f09c-82f3-4484-966c-539cbf577402] Running
	I0819 12:00:23.669179   85279 system_pods.go:61] "kube-controller-manager-addons-010148" [822cea42-6184-4359-bba1-6b01a6745253] Running
	I0819 12:00:23.669195   85279 system_pods.go:61] "kube-ingress-dns-minikube" [cd2c0881-7db8-4d07-9af4-29b0e4c51dfb] Running
	I0819 12:00:23.669200   85279 system_pods.go:61] "kube-proxy-94dm9" [debbf67c-381d-45ff-942c-c66366a93408] Running
	I0819 12:00:23.669205   85279 system_pods.go:61] "kube-scheduler-addons-010148" [ed387e21-f76e-45aa-a736-b721b15f1913] Running
	I0819 12:00:23.669212   85279 system_pods.go:61] "metrics-server-8988944d9-phfcl" [82ed99b0-3ee4-42b7-9afc-f26a47b0d057] Running
	I0819 12:00:23.669220   85279 system_pods.go:61] "nvidia-device-plugin-daemonset-9gfqj" [780617de-6822-48b4-bc3f-20932c2c5681] Running
	I0819 12:00:23.669226   85279 system_pods.go:61] "registry-6fb4cdfc84-vzmzk" [f04fc68c-2fa9-46e6-a18d-49a1a8a81968] Running
	I0819 12:00:23.669235   85279 system_pods.go:61] "registry-proxy-zddbz" [59ab7eba-4de5-4dd0-b7df-ee19cd688277] Running
	I0819 12:00:23.669242   85279 system_pods.go:61] "snapshot-controller-56fcc65765-nm2ls" [ce8958b5-a572-45b2-9873-0162c21c0841] Running
	I0819 12:00:23.669250   85279 system_pods.go:61] "snapshot-controller-56fcc65765-wm5wz" [e3fcb584-3ae8-4204-a72e-c4eeae36b98a] Running
	I0819 12:00:23.669255   85279 system_pods.go:61] "storage-provisioner" [5915f065-bf02-4049-9370-4c383eeceabb] Running
	I0819 12:00:23.669261   85279 system_pods.go:61] "tiller-deploy-b48cc5f79-99f2d" [a79cfc7e-dad8-4740-8386-760769073d6b] Running
	I0819 12:00:23.669271   85279 system_pods.go:74] duration metric: took 10.855286346s to wait for pod list to return data ...
	I0819 12:00:23.669283   85279 default_sa.go:34] waiting for default service account to be created ...
	I0819 12:00:23.672036   85279 default_sa.go:45] found service account: "default"
	I0819 12:00:23.672060   85279 default_sa.go:55] duration metric: took 2.768504ms for default service account to be created ...
	I0819 12:00:23.672069   85279 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 12:00:23.680136   85279 system_pods.go:86] 19 kube-system pods found
	I0819 12:00:23.680163   85279 system_pods.go:89] "coredns-6f6b679f8f-7mkcm" [c6c9f0bb-626f-4b5c-addb-605b703dad1a] Running
	I0819 12:00:23.680170   85279 system_pods.go:89] "csi-hostpath-attacher-0" [d3bb6a3e-0662-420b-b481-2520d71bb56a] Running
	I0819 12:00:23.680174   85279 system_pods.go:89] "csi-hostpath-resizer-0" [325c3846-5ce5-492a-b33a-662b8e3786c1] Running
	I0819 12:00:23.680177   85279 system_pods.go:89] "csi-hostpathplugin-2s76k" [0d7cc92a-db70-4d11-b4f3-7c4990113f97] Running
	I0819 12:00:23.680181   85279 system_pods.go:89] "etcd-addons-010148" [4c3e9bff-8d94-4b44-9b21-9d4208060167] Running
	I0819 12:00:23.680188   85279 system_pods.go:89] "kindnet-cppjb" [367f146a-254f-4dc3-b429-a96edfbe5d80] Running
	I0819 12:00:23.680191   85279 system_pods.go:89] "kube-apiserver-addons-010148" [6775f09c-82f3-4484-966c-539cbf577402] Running
	I0819 12:00:23.680195   85279 system_pods.go:89] "kube-controller-manager-addons-010148" [822cea42-6184-4359-bba1-6b01a6745253] Running
	I0819 12:00:23.680200   85279 system_pods.go:89] "kube-ingress-dns-minikube" [cd2c0881-7db8-4d07-9af4-29b0e4c51dfb] Running
	I0819 12:00:23.680203   85279 system_pods.go:89] "kube-proxy-94dm9" [debbf67c-381d-45ff-942c-c66366a93408] Running
	I0819 12:00:23.680206   85279 system_pods.go:89] "kube-scheduler-addons-010148" [ed387e21-f76e-45aa-a736-b721b15f1913] Running
	I0819 12:00:23.680210   85279 system_pods.go:89] "metrics-server-8988944d9-phfcl" [82ed99b0-3ee4-42b7-9afc-f26a47b0d057] Running
	I0819 12:00:23.680213   85279 system_pods.go:89] "nvidia-device-plugin-daemonset-9gfqj" [780617de-6822-48b4-bc3f-20932c2c5681] Running
	I0819 12:00:23.680217   85279 system_pods.go:89] "registry-6fb4cdfc84-vzmzk" [f04fc68c-2fa9-46e6-a18d-49a1a8a81968] Running
	I0819 12:00:23.680219   85279 system_pods.go:89] "registry-proxy-zddbz" [59ab7eba-4de5-4dd0-b7df-ee19cd688277] Running
	I0819 12:00:23.680223   85279 system_pods.go:89] "snapshot-controller-56fcc65765-nm2ls" [ce8958b5-a572-45b2-9873-0162c21c0841] Running
	I0819 12:00:23.680226   85279 system_pods.go:89] "snapshot-controller-56fcc65765-wm5wz" [e3fcb584-3ae8-4204-a72e-c4eeae36b98a] Running
	I0819 12:00:23.680228   85279 system_pods.go:89] "storage-provisioner" [5915f065-bf02-4049-9370-4c383eeceabb] Running
	I0819 12:00:23.680231   85279 system_pods.go:89] "tiller-deploy-b48cc5f79-99f2d" [a79cfc7e-dad8-4740-8386-760769073d6b] Running
	I0819 12:00:23.680238   85279 system_pods.go:126] duration metric: took 8.162576ms to wait for k8s-apps to be running ...
	I0819 12:00:23.680247   85279 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 12:00:23.680291   85279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:00:23.691554   85279 system_svc.go:56] duration metric: took 11.29787ms WaitForService to wait for kubelet
	I0819 12:00:23.691585   85279 kubeadm.go:582] duration metric: took 2m20.167153014s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 12:00:23.691605   85279 node_conditions.go:102] verifying NodePressure condition ...
	I0819 12:00:23.694403   85279 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0819 12:00:23.694429   85279 node_conditions.go:123] node cpu capacity is 8
	I0819 12:00:23.694445   85279 node_conditions.go:105] duration metric: took 2.834599ms to run NodePressure ...
	I0819 12:00:23.694459   85279 start.go:241] waiting for startup goroutines ...
	I0819 12:00:23.694469   85279 start.go:246] waiting for cluster config update ...
	I0819 12:00:23.694491   85279 start.go:255] writing updated cluster config ...
	I0819 12:00:23.694772   85279 ssh_runner.go:195] Run: rm -f paused
	I0819 12:00:23.742573   85279 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 12:00:23.744897   85279 out.go:177] * Done! kubectl is now configured to use "addons-010148" cluster and "default" namespace by default
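	
	The per-component dumps that follow can be regenerated from the finished profile with minikube logs; a minimal sketch, assuming the profile name and binary path used in this run:
	
	  # Write the full post-mortem bundle to a file (sketch, not part of the original run)
	  out/minikube-linux-amd64 -p addons-010148 logs --file=logs.txt
	  # Or print the most recent entries per component to stdout
	  out/minikube-linux-amd64 -p addons-010148 logs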
	
	
	==> CRI-O <==
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.417942757Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-bc57996ff-4vzf7 Namespace:ingress-nginx ID:6c2abe16fa9af975374813861d57400843aeeb9c7b6e30921676e02f7e9c4a83 UID:dbdd9f93-225f-497c-a174-9b777086b278 NetNS:/var/run/netns/05bb9ce0-4fbc-40f3-8ec4-3352dcbcde3d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.418057867Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-bc57996ff-4vzf7 from CNI network \"kindnet\" (type=ptp)"
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.447386182Z" level=info msg="Stopped pod sandbox: 6c2abe16fa9af975374813861d57400843aeeb9c7b6e30921676e02f7e9c4a83" id=667cde57-5d51-44df-82d6-072d41d2817f name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.667511318Z" level=info msg="Removing container: 7b2e93cd36916ab98c9b24200e23344ef022e118e64da34ed02eb8a2d6dea3d2" id=a512aca7-5832-4244-b70f-9e72db71a297 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.680361242Z" level=info msg="Removed container 7b2e93cd36916ab98c9b24200e23344ef022e118e64da34ed02eb8a2d6dea3d2: ingress-nginx/ingress-nginx-admission-create-r6r8n/create" id=a512aca7-5832-4244-b70f-9e72db71a297 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.681563869Z" level=info msg="Removing container: 2ce00e9767deb079c54c8ae563bc35c8ecc48082e9c93bb2be9a4664f4b91087" id=ffb96613-106e-4d15-bc52-8b7b50d3a602 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.695085066Z" level=info msg="Removed container 2ce00e9767deb079c54c8ae563bc35c8ecc48082e9c93bb2be9a4664f4b91087: ingress-nginx/ingress-nginx-controller-bc57996ff-4vzf7/controller" id=ffb96613-106e-4d15-bc52-8b7b50d3a602 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.696376342Z" level=info msg="Removing container: 0128c03f68235eaf634d3cd838682f3f4b800669a1efbd4fbe48c647d0880309" id=531ad93a-187b-4857-8428-107f6751c103 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.709624808Z" level=info msg="Removed container 0128c03f68235eaf634d3cd838682f3f4b800669a1efbd4fbe48c647d0880309: ingress-nginx/ingress-nginx-admission-patch-dngcz/patch" id=531ad93a-187b-4857-8428-107f6751c103 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.713319270Z" level=info msg="Stopping pod sandbox: 6c2abe16fa9af975374813861d57400843aeeb9c7b6e30921676e02f7e9c4a83" id=bfdf6c63-1352-4c83-8ba5-01192a1a9370 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.713375338Z" level=info msg="Stopped pod sandbox (already stopped): 6c2abe16fa9af975374813861d57400843aeeb9c7b6e30921676e02f7e9c4a83" id=bfdf6c63-1352-4c83-8ba5-01192a1a9370 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.713674752Z" level=info msg="Removing pod sandbox: 6c2abe16fa9af975374813861d57400843aeeb9c7b6e30921676e02f7e9c4a83" id=a01d67da-a9d8-4a34-86c3-2cf2dd80f2c8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.719599731Z" level=info msg="Removed pod sandbox: 6c2abe16fa9af975374813861d57400843aeeb9c7b6e30921676e02f7e9c4a83" id=a01d67da-a9d8-4a34-86c3-2cf2dd80f2c8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.720161839Z" level=info msg="Stopping pod sandbox: 811a8961a31eacd56dd6176ae84df8eabc2919a9b1ac357164536fe77d350e31" id=f9c3d094-3f07-45a7-980f-89158dbe10c7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.720194333Z" level=info msg="Stopped pod sandbox (already stopped): 811a8961a31eacd56dd6176ae84df8eabc2919a9b1ac357164536fe77d350e31" id=f9c3d094-3f07-45a7-980f-89158dbe10c7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.720575756Z" level=info msg="Removing pod sandbox: 811a8961a31eacd56dd6176ae84df8eabc2919a9b1ac357164536fe77d350e31" id=61d3b581-6215-4d64-b71d-fe02ac96aa5c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.727010770Z" level=info msg="Removed pod sandbox: 811a8961a31eacd56dd6176ae84df8eabc2919a9b1ac357164536fe77d350e31" id=61d3b581-6215-4d64-b71d-fe02ac96aa5c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.727490041Z" level=info msg="Stopping pod sandbox: 95e49faad39048ef2f0585b5599c506727199677a2bc3ef68e4989d5437008b7" id=bc9dd9d7-23ec-4fb1-ad27-78130cd63c6e name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.727529266Z" level=info msg="Stopped pod sandbox (already stopped): 95e49faad39048ef2f0585b5599c506727199677a2bc3ef68e4989d5437008b7" id=bc9dd9d7-23ec-4fb1-ad27-78130cd63c6e name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.727825738Z" level=info msg="Removing pod sandbox: 95e49faad39048ef2f0585b5599c506727199677a2bc3ef68e4989d5437008b7" id=d08bccd6-38f9-458f-aac5-1e766306a545 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.733359387Z" level=info msg="Removed pod sandbox: 95e49faad39048ef2f0585b5599c506727199677a2bc3ef68e4989d5437008b7" id=d08bccd6-38f9-458f-aac5-1e766306a545 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.733788329Z" level=info msg="Stopping pod sandbox: d3e672d13bf8c449aed42e0556264bf7b567e898847170fa6db9e0c0aa3818fd" id=5487bab7-ba72-449e-909a-dcf52e833488 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.733823762Z" level=info msg="Stopped pod sandbox (already stopped): d3e672d13bf8c449aed42e0556264bf7b567e898847170fa6db9e0c0aa3818fd" id=5487bab7-ba72-449e-909a-dcf52e833488 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.734141636Z" level=info msg="Removing pod sandbox: d3e672d13bf8c449aed42e0556264bf7b567e898847170fa6db9e0c0aa3818fd" id=e87d7e1f-0b63-4253-89cd-98fc4fe148fb name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.740399266Z" level=info msg="Removed pod sandbox: d3e672d13bf8c449aed42e0556264bf7b567e898847170fa6db9e0c0aa3818fd" id=e87d7e1f-0b63-4253-89cd-98fc4fe148fb name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	29b5139aeabec       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   6 seconds ago       Running             hello-world-app           0                   1adff60419d8d       hello-world-app-55bf9c44b4-qjzs2
	c550b65af1c54       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                         2 minutes ago       Running             nginx                     0                   faa36f40b0de4       nginx
	cf4c86f2ad830       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     3 minutes ago       Running             busybox                   0                   cff6eb25eb7b9       busybox
	055544534d3c6       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   5 minutes ago       Running             metrics-server            0                   16438c4c6da49       metrics-server-8988944d9-phfcl
	1ba40d35141d7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        5 minutes ago       Running             coredns                   0                   3925d9b1ded78       coredns-6f6b679f8f-7mkcm
	593cfb1da27e0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        5 minutes ago       Running             storage-provisioner       0                   f0aa565db5dce       storage-provisioner
	f31388602abfe       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b                      5 minutes ago       Running             kindnet-cni               0                   f899e31c46b0c       kindnet-cppjb
	7576faede8f13       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                        5 minutes ago       Running             kube-proxy                0                   b3dfdd2e1d8d3       kube-proxy-94dm9
	7fbf8e09fb2a0       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                        6 minutes ago       Running             kube-scheduler            0                   052e31579ba74       kube-scheduler-addons-010148
	e17cd1075970d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        6 minutes ago       Running             etcd                      0                   833d2f75bc1ff       etcd-addons-010148
	8061bb277832a       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                        6 minutes ago       Running             kube-controller-manager   0                   d3c6a9c216488       kube-controller-manager-addons-010148
	36d77af4416f3       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                        6 minutes ago       Running             kube-apiserver            0                   c7613a422ae4b       kube-apiserver-addons-010148
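	
	The table above mirrors CRI-O's own container listing and can be reproduced from the node; a minimal sketch, assuming the same profile:
	
	  # List all containers (running and exited) via the CRI (sketch)
	  out/minikube-linux-amd64 -p addons-010148 ssh "sudo crictl ps -a"
	  # Pod sandboxes, including the ingress-nginx sandboxes removed in the CRI-O log above
	  out/minikube-linux-amd64 -p addons-010148 ssh "sudo crictl pods"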
	
	
	==> coredns [1ba40d35141d7f28ec8d67877f5f2cc2cb772315a96c8bdbb483de0aa075499f] <==
	[INFO] 10.244.0.19:42174 - 33091 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00009759s
	[INFO] 10.244.0.19:47604 - 29219 "A IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.00397029s
	[INFO] 10.244.0.19:47604 - 7713 "AAAA IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004036227s
	[INFO] 10.244.0.19:44346 - 3410 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.003906896s
	[INFO] 10.244.0.19:44346 - 43095 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.003974597s
	[INFO] 10.244.0.19:53911 - 15066 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003823351s
	[INFO] 10.244.0.19:53911 - 30681 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.008608898s
	[INFO] 10.244.0.19:40577 - 36299 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00006749s
	[INFO] 10.244.0.19:40577 - 63182 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000073739s
	[INFO] 10.244.0.20:36919 - 25940 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000193953s
	[INFO] 10.244.0.20:39864 - 39507 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0001858s
	[INFO] 10.244.0.20:45599 - 42134 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000133358s
	[INFO] 10.244.0.20:53451 - 55020 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00011509s
	[INFO] 10.244.0.20:34568 - 33422 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000121545s
	[INFO] 10.244.0.20:46878 - 7425 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000137153s
	[INFO] 10.244.0.20:36307 - 17801 "AAAA IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.004680388s
	[INFO] 10.244.0.20:40391 - 55865 "A IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.004741399s
	[INFO] 10.244.0.20:57021 - 22476 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.004192647s
	[INFO] 10.244.0.20:46334 - 5438 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.00466871s
	[INFO] 10.244.0.20:39512 - 37463 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004172581s
	[INFO] 10.244.0.20:38926 - 44539 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004328609s
	[INFO] 10.244.0.20:60058 - 9234 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000758534s
	[INFO] 10.244.0.20:46975 - 12242 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000828275s
	[INFO] 10.244.0.23:59481 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00017755s
	[INFO] 10.244.0.23:50784 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000173374s
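	
	The NXDOMAIN bursts above are the pod resolver walking its search path (cluster.local plus the GCE *.internal suffixes) before the absolute name answers NOERROR; with the default ndots:5 this is expected. A sketch to observe the same expansion from the busybox pod present in this run:
	
	  # Query through the cluster DNS from an existing pod (sketch)
	  kubectl --context addons-010148 exec busybox -- nslookup registry.kube-system.svc.cluster.local
	  # The search suffixes and ndots option driving the extra lookups
	  kubectl --context addons-010148 exec busybox -- cat /etc/resolv.conf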
	
	
	==> describe nodes <==
	Name:               addons-010148
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-010148
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=addons-010148
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T11_57_59_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-010148
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 11:57:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-010148
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:03:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 12:02:03 +0000   Mon, 19 Aug 2024 11:57:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 12:02:03 +0000   Mon, 19 Aug 2024 11:57:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 12:02:03 +0000   Mon, 19 Aug 2024 11:57:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 12:02:03 +0000   Mon, 19 Aug 2024 11:58:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-010148
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 5536b84bdec745ff98ea72a7ce81abf4
	  System UUID:                a7c6b126-5a64-4229-83f1-4ce38b7718a7
	  Boot ID:                    27d0ea76-89fe-494c-b831-ffe5c08f219c
	  Kernel Version:             5.15.0-1066-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  default                     hello-world-app-55bf9c44b4-qjzs2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 coredns-6f6b679f8f-7mkcm                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     6m
	  kube-system                 etcd-addons-010148                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m5s
	  kube-system                 kindnet-cppjb                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m
	  kube-system                 kube-apiserver-addons-010148             250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-controller-manager-addons-010148    200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-proxy-94dm9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 kube-scheduler-addons-010148             100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 metrics-server-8988944d9-phfcl           100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         5m55s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 5m54s  kube-proxy       
	  Normal   Starting                 6m5s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m5s   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  6m5s   kubelet          Node addons-010148 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m5s   kubelet          Node addons-010148 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m5s   kubelet          Node addons-010148 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m1s   node-controller  Node addons-010148 event: Registered Node addons-010148 in Controller
	  Normal   NodeReady                5m41s  kubelet          Node addons-010148 status is now: NodeReady
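	
	The node report above can be re-queried at any point; a sketch that pulls just the allocatable block matching the Capacity/Allocatable lines above (cpu 8, memory 32859312Ki, pods 110):
	
	  # Full node description (sketch)
	  kubectl --context addons-010148 describe node addons-010148
	  # Allocatable resources only
	  kubectl --context addons-010148 get node addons-010148 -o jsonpath='{.status.allocatable}'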
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: 52 54 00 10 a2 1d 52 54 00 6c b7 6b 08 00
	[  +0.001189] IPv4: martian source 192.168.122.1 from 10.244.0.4, on dev virbr0
	[  +0.000003] ll header: 00000000: 52 54 00 10 a2 1d 52 54 00 6c b7 6b 08 00
	[  +0.507412] IPv4: martian source 192.168.122.1 from 10.244.0.4, on dev virbr0
	[  +0.000006] ll header: 00000000: 52 54 00 10 a2 1d 52 54 00 6c b7 6b 08 00
	[  +0.000444] IPv4: martian source 192.168.122.1 from 10.244.0.2, on dev virbr0
	[  +0.000001] ll header: 00000000: 52 54 00 10 a2 1d 52 54 00 6c b7 6b 08 00
	[  +1.500650] IPv4: martian source 192.168.122.1 from 10.244.0.4, on dev virbr0
	[  +0.000006] ll header: 00000000: 52 54 00 10 a2 1d 52 54 00 6c b7 6b 08 00
	[  +0.001146] IPv4: martian source 192.168.122.1 from 10.244.0.2, on dev virbr0
	[  +0.000003] ll header: 00000000: 52 54 00 10 a2 1d 52 54 00 6c b7 6b 08 00
	[Aug19 12:01] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 5d 12 8b 78 0f de 05 b6 37 00 11 08 00
	[  +1.031417] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 7a 5d 12 8b 78 0f de 05 b6 37 00 11 08 00
	[  +2.015773] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 7a 5d 12 8b 78 0f de 05 b6 37 00 11 08 00
	[  +4.191588] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 7a 5d 12 8b 78 0f de 05 b6 37 00 11 08 00
	[  +8.191150] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 7a 5d 12 8b 78 0f de 05 b6 37 00 11 08 00
	[Aug19 12:02] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 5d 12 8b 78 0f de 05 b6 37 00 11 08 00
	[ +33.788481] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 7a 5d 12 8b 78 0f de 05 b6 37 00 11 08 00
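	
	The "martian source" entries are the kernel flagging packets whose source address is unexpected for the receiving interface (logged when log_martians is enabled); a sketch to filter them on the node:
	
	  # Pull the martian entries plus their link-layer header lines (sketch)
	  out/minikube-linux-amd64 -p addons-010148 ssh "sudo dmesg | grep -A1 martian"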
	
	
	==> etcd [e17cd1075970dd8b734c02644a4b292bf0526aa5bf9947e2c87dd79e944a7707] <==
	{"level":"info","ts":"2024-08-19T11:57:54.353584Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T11:57:54.353689Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T11:57:54.354544Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T11:57:54.354706Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-08-19T11:58:05.456954Z","caller":"traceutil/trace.go:171","msg":"trace[525934890] linearizableReadLoop","detail":"{readStateIndex:365; appliedIndex:364; }","duration":"104.54643ms","start":"2024-08-19T11:58:05.352386Z","end":"2024-08-19T11:58:05.456933Z","steps":["trace[525934890] 'read index received'  (duration: 100.346744ms)","trace[525934890] 'applied index is now lower than readState.Index'  (duration: 4.198704ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T11:58:05.457119Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.700886ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-08-19T11:58:05.457170Z","caller":"traceutil/trace.go:171","msg":"trace[1781395707] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:353; }","duration":"104.779283ms","start":"2024-08-19T11:58:05.352381Z","end":"2024-08-19T11:58:05.457160Z","steps":["trace[1781395707] 'agreement among raft nodes before linearized reading'  (duration: 104.648098ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T11:58:05.457399Z","caller":"traceutil/trace.go:171","msg":"trace[434263034] transaction","detail":"{read_only:false; response_revision:353; number_of_response:1; }","duration":"210.662075ms","start":"2024-08-19T11:58:05.246728Z","end":"2024-08-19T11:58:05.457390Z","steps":["trace[434263034] 'process raft request'  (duration: 207.488696ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T11:58:06.146316Z","caller":"traceutil/trace.go:171","msg":"trace[1129713747] transaction","detail":"{read_only:false; response_revision:367; number_of_response:1; }","duration":"100.216897ms","start":"2024-08-19T11:58:06.046081Z","end":"2024-08-19T11:58:06.146298Z","steps":["trace[1129713747] 'process raft request'  (duration: 99.878221ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T11:58:06.150087Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.99354ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T11:58:06.159079Z","caller":"traceutil/trace.go:171","msg":"trace[37306806] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:370; }","duration":"111.99514ms","start":"2024-08-19T11:58:06.047067Z","end":"2024-08-19T11:58:06.159062Z","steps":["trace[37306806] 'agreement among raft nodes before linearized reading'  (duration: 102.979958ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T11:58:06.150131Z","caller":"traceutil/trace.go:171","msg":"trace[1302662073] transaction","detail":"{read_only:false; response_revision:368; number_of_response:1; }","duration":"103.80255ms","start":"2024-08-19T11:58:06.046314Z","end":"2024-08-19T11:58:06.150117Z","steps":["trace[1302662073] 'process raft request'  (duration: 99.720069ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T11:58:06.150230Z","caller":"traceutil/trace.go:171","msg":"trace[1694963596] transaction","detail":"{read_only:false; response_revision:369; number_of_response:1; }","duration":"103.670164ms","start":"2024-08-19T11:58:06.046551Z","end":"2024-08-19T11:58:06.150221Z","steps":["trace[1694963596] 'process raft request'  (duration: 99.517575ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T11:58:07.242822Z","caller":"traceutil/trace.go:171","msg":"trace[2014269219] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"182.940392ms","start":"2024-08-19T11:58:07.059849Z","end":"2024-08-19T11:58:07.242789Z","steps":["trace[2014269219] 'process raft request'  (duration: 100.1189ms)","trace[2014269219] 'compare'  (duration: 82.261843ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T11:58:07.244685Z","caller":"traceutil/trace.go:171","msg":"trace[120059853] linearizableReadLoop","detail":"{readStateIndex:406; appliedIndex:404; }","duration":"184.708768ms","start":"2024-08-19T11:58:07.059940Z","end":"2024-08-19T11:58:07.244648Z","steps":["trace[120059853] 'read index received'  (duration: 87.547766ms)","trace[120059853] 'applied index is now lower than readState.Index'  (duration: 97.160117ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T11:58:07.245277Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.320612ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-addons-010148\" ","response":"range_response_count:1 size:5750"}
	{"level":"info","ts":"2024-08-19T11:58:07.245318Z","caller":"traceutil/trace.go:171","msg":"trace[519150204] range","detail":"{range_begin:/registry/pods/kube-system/etcd-addons-010148; range_end:; response_count:1; response_revision:398; }","duration":"185.372377ms","start":"2024-08-19T11:58:07.059937Z","end":"2024-08-19T11:58:07.245310Z","steps":["trace[519150204] 'agreement among raft nodes before linearized reading'  (duration: 185.297954ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T11:58:07.765989Z","caller":"traceutil/trace.go:171","msg":"trace[154973813] transaction","detail":"{read_only:false; response_revision:432; number_of_response:1; }","duration":"107.841159ms","start":"2024-08-19T11:58:07.658137Z","end":"2024-08-19T11:58:07.765978Z","steps":["trace[154973813] 'process raft request'  (duration: 107.753325ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T11:58:07.845270Z","caller":"traceutil/trace.go:171","msg":"trace[210933175] linearizableReadLoop","detail":"{readStateIndex:450; appliedIndex:446; }","duration":"182.528486ms","start":"2024-08-19T11:58:07.662725Z","end":"2024-08-19T11:58:07.845254Z","steps":["trace[210933175] 'read index received'  (duration: 179.221846ms)","trace[210933175] 'applied index is now lower than readState.Index'  (duration: 3.305879ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T11:58:07.845516Z","caller":"traceutil/trace.go:171","msg":"trace[75732263] transaction","detail":"{read_only:false; response_revision:433; number_of_response:1; }","duration":"184.985681ms","start":"2024-08-19T11:58:07.660502Z","end":"2024-08-19T11:58:07.845488Z","steps":["trace[75732263] 'process raft request'  (duration: 184.476453ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T11:58:07.845755Z","caller":"traceutil/trace.go:171","msg":"trace[1602890120] transaction","detail":"{read_only:false; response_revision:434; number_of_response:1; }","duration":"183.39727ms","start":"2024-08-19T11:58:07.662347Z","end":"2024-08-19T11:58:07.845745Z","steps":["trace[1602890120] 'process raft request'  (duration: 182.734198ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T11:58:07.845985Z","caller":"traceutil/trace.go:171","msg":"trace[1254138284] transaction","detail":"{read_only:false; response_revision:435; number_of_response:1; }","duration":"183.538299ms","start":"2024-08-19T11:58:07.662434Z","end":"2024-08-19T11:58:07.845972Z","steps":["trace[1254138284] 'process raft request'  (duration: 182.690122ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T11:58:07.846157Z","caller":"traceutil/trace.go:171","msg":"trace[1200692081] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"183.54906ms","start":"2024-08-19T11:58:07.662598Z","end":"2024-08-19T11:58:07.846147Z","steps":["trace[1200692081] 'process raft request'  (duration: 182.554432ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T11:58:07.846601Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.860566ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-ingress-dns-minikube\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T11:58:07.846632Z","caller":"traceutil/trace.go:171","msg":"trace[571859508] range","detail":"{range_begin:/registry/pods/kube-system/kube-ingress-dns-minikube; range_end:; response_count:0; response_revision:438; }","duration":"183.901849ms","start":"2024-08-19T11:58:07.662722Z","end":"2024-08-19T11:58:07.846624Z","steps":["trace[571859508] 'agreement among raft nodes before linearized reading'  (duration: 183.841931ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:04:03 up  1:45,  0 users,  load average: 0.28, 0.97, 1.48
	Linux addons-010148 5.15.0-1066-gcp #74~20.04.1-Ubuntu SMP Fri Jul 26 09:28:41 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [f31388602abfee12cd9da8716a9bfc73defa40f23787afd89f601672911eca12] <==
	I0819 12:02:42.443232       1 main.go:299] handling current node
	I0819 12:02:52.443244       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 12:02:52.443277       1 main.go:299] handling current node
	W0819 12:03:00.661957       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 12:03:00.661998       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 12:03:02.443546       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 12:03:02.443589       1 main.go:299] handling current node
	I0819 12:03:12.443221       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 12:03:12.443268       1 main.go:299] handling current node
	W0819 12:03:21.682674       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 12:03:21.682715       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0819 12:03:22.442787       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 12:03:22.442826       1 main.go:299] handling current node
	W0819 12:03:31.997701       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 12:03:31.997740       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0819 12:03:32.443000       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 12:03:32.443036       1 main.go:299] handling current node
	W0819 12:03:34.146908       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 12:03:34.146949       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 12:03:42.442761       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 12:03:42.442798       1 main.go:299] handling current node
	I0819 12:03:52.443704       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 12:03:52.443741       1 main.go:299] handling current node
	I0819 12:04:02.443299       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 12:04:02.443345       1 main.go:299] handling current node
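	
	The recurring forbidden errors above can be checked against RBAC directly by impersonating the kindnet service account; a minimal sketch:
	
	  # Does the kindnet SA hold the cluster-scope list permissions it keeps retrying? (sketch)
	  kubectl --context addons-010148 auth can-i list pods --all-namespaces --as=system:serviceaccount:kube-system:kindnet
	  kubectl --context addons-010148 auth can-i list networkpolicies.networking.k8s.io --all-namespaces --as=system:serviceaccount:kube-system:kindnet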
	
	
	==> kube-apiserver [36d77af4416f318c7dd2e515b53d2f8e8f6a7aff8c1a76effc3dd34ac817ffa8] <==
	E0819 11:59:50.487844       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0819 12:00:34.170733       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34700: use of closed network connection
	E0819 12:00:34.329454       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34724: use of closed network connection
	I0819 12:00:49.028755       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0819 12:00:50.045103       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0819 12:01:08.938320       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0819 12:01:11.642736       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.123.28"}
	I0819 12:01:29.603327       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0819 12:01:29.956443       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.202.194"}
	I0819 12:01:36.171830       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 12:01:36.171995       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 12:01:36.246572       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 12:01:36.246650       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 12:01:36.252809       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 12:01:36.252939       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 12:01:36.259396       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 12:01:36.259533       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 12:01:36.269173       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 12:01:36.269310       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0819 12:01:37.253483       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0819 12:01:37.269267       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0819 12:01:37.279122       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0819 12:01:38.816376       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0819 12:01:45.213091       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.31:52722: read: connection reset by peer
	I0819 12:03:53.450347       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.89.6"}
	
	
	==> kube-controller-manager [8061bb277832a86953e5674d2e2e5818133dadf970b95c826e4a6d669caf8824] <==
	W0819 12:02:20.370544       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:02:20.370592       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 12:02:35.549161       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:02:35.549214       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 12:02:52.562170       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:02:52.562214       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 12:03:00.532542       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:03:00.532588       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 12:03:07.554657       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:03:07.554697       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 12:03:31.069554       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:03:31.069611       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 12:03:35.328764       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:03:35.328806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 12:03:51.767777       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:03:51.767846       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0819 12:03:53.223214       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="14.307648ms"
	I0819 12:03:53.228258       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="4.992051ms"
	I0819 12:03:53.228351       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="45.444µs"
	I0819 12:03:53.234433       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="33.377µs"
	I0819 12:03:55.261562       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0819 12:03:55.263077       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="7.402µs"
	I0819 12:03:55.265608       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0819 12:03:57.769963       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="4.86835ms"
	I0819 12:03:57.770086       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="44.529µs"
	
	
	==> kube-proxy [7576faede8f131e25b57c7b4f57de41151a944c8b2543a6464d7dcb00ba56c70] <==
	I0819 11:58:07.156787       1 server_linux.go:66] "Using iptables proxy"
	I0819 11:58:08.048066       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0819 11:58:08.052375       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 11:58:08.650820       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0819 11:58:08.650978       1 server_linux.go:169] "Using iptables Proxier"
	I0819 11:58:08.654724       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 11:58:08.655576       1 server.go:483] "Version info" version="v1.31.0"
	I0819 11:58:08.656147       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 11:58:08.657731       1 config.go:197] "Starting service config controller"
	I0819 11:58:08.659399       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 11:58:08.659091       1 config.go:326] "Starting node config controller"
	I0819 11:58:08.659524       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 11:58:08.658449       1 config.go:104] "Starting endpoint slice config controller"
	I0819 11:58:08.659542       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 11:58:08.760197       1 shared_informer.go:320] Caches are synced for service config
	I0819 11:58:08.760265       1 shared_informer.go:320] Caches are synced for node config
	I0819 11:58:08.842496       1 shared_informer.go:320] Caches are synced for endpoint slice config
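	
	kube-proxy selected the iptables proxier in IPv4-primary dual-stack mode; the rules it programs can be inspected on the node. A sketch, assuming the same profile:
	
	  # Service dispatch rules installed by the iptables proxier (sketch)
	  out/minikube-linux-amd64 -p addons-010148 ssh "sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20"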
	
	
	==> kube-scheduler [7fbf8e09fb2a038cef54df33300676b83e9136e5a8678f01941fa74163e76773] <==
	W0819 11:57:55.954627       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0819 11:57:55.954680       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0819 11:57:55.954716       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 11:57:55.954722       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:55.954748       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 11:57:55.954753       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0819 11:57:55.954768       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0819 11:57:55.954694       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:55.954632       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 11:57:55.955045       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:55.954849       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 11:57:55.955078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:55.954871       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 11:57:55.955101       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:55.954953       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 11:57:55.955125       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:56.759311       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 11:57:56.759352       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:56.819989       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 11:57:56.820039       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:56.868798       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 11:57:56.868837       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 11:57:56.870621       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 11:57:56.870661       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0819 11:57:58.651702       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
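	The burst of "forbidden" reflector warnings above is the usual kube-scheduler startup race: its informers start listing resources before the apiserver's RBAC data for system:kube-scheduler is in place, and the errors stop once the caches sync (the last line above). After the fact, the permissions can be confirmed with impersonation via kubectl auth can-i (a diagnostic sketch, assuming the addons-010148 context from this run):
	
		kubectl --context addons-010148 auth can-i list pods --all-namespaces --as=system:kube-scheduler
		kubectl --context addons-010148 auth can-i list csinodes.storage.k8s.io --as=system:kube-scheduler
	
	Both should print "yes" once the control plane has settled.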
	
	
	==> kubelet <==
	Aug 19 12:03:53 addons-010148 kubelet[1622]: I0819 12:03:53.224356    1622 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0829558-4841-446d-9e7e-a55261892fe3" containerName="helm-test"
	Aug 19 12:03:53 addons-010148 kubelet[1622]: I0819 12:03:53.352475    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xmm4\" (UniqueName: \"kubernetes.io/projected/f402914f-e893-45c1-8033-afa315d0178d-kube-api-access-8xmm4\") pod \"hello-world-app-55bf9c44b4-qjzs2\" (UID: \"f402914f-e893-45c1-8033-afa315d0178d\") " pod="default/hello-world-app-55bf9c44b4-qjzs2"
	Aug 19 12:03:54 addons-010148 kubelet[1622]: I0819 12:03:54.359659    1622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7swgg\" (UniqueName: \"kubernetes.io/projected/cd2c0881-7db8-4d07-9af4-29b0e4c51dfb-kube-api-access-7swgg\") pod \"cd2c0881-7db8-4d07-9af4-29b0e4c51dfb\" (UID: \"cd2c0881-7db8-4d07-9af4-29b0e4c51dfb\") "
	Aug 19 12:03:54 addons-010148 kubelet[1622]: I0819 12:03:54.361542    1622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd2c0881-7db8-4d07-9af4-29b0e4c51dfb-kube-api-access-7swgg" (OuterVolumeSpecName: "kube-api-access-7swgg") pod "cd2c0881-7db8-4d07-9af4-29b0e4c51dfb" (UID: "cd2c0881-7db8-4d07-9af4-29b0e4c51dfb"). InnerVolumeSpecName "kube-api-access-7swgg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 19 12:03:54 addons-010148 kubelet[1622]: I0819 12:03:54.460893    1622 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-7swgg\" (UniqueName: \"kubernetes.io/projected/cd2c0881-7db8-4d07-9af4-29b0e4c51dfb-kube-api-access-7swgg\") on node \"addons-010148\" DevicePath \"\""
	Aug 19 12:03:54 addons-010148 kubelet[1622]: I0819 12:03:54.748729    1622 scope.go:117] "RemoveContainer" containerID="ebbc3e53cda7690acb6cfd618f1ed4c66dccaf7d249ca9e91ff6147296662f37"
	Aug 19 12:03:54 addons-010148 kubelet[1622]: I0819 12:03:54.765669    1622 scope.go:117] "RemoveContainer" containerID="ebbc3e53cda7690acb6cfd618f1ed4c66dccaf7d249ca9e91ff6147296662f37"
	Aug 19 12:03:54 addons-010148 kubelet[1622]: E0819 12:03:54.766075    1622 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebbc3e53cda7690acb6cfd618f1ed4c66dccaf7d249ca9e91ff6147296662f37\": container with ID starting with ebbc3e53cda7690acb6cfd618f1ed4c66dccaf7d249ca9e91ff6147296662f37 not found: ID does not exist" containerID="ebbc3e53cda7690acb6cfd618f1ed4c66dccaf7d249ca9e91ff6147296662f37"
	Aug 19 12:03:54 addons-010148 kubelet[1622]: I0819 12:03:54.766117    1622 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebbc3e53cda7690acb6cfd618f1ed4c66dccaf7d249ca9e91ff6147296662f37"} err="failed to get container status \"ebbc3e53cda7690acb6cfd618f1ed4c66dccaf7d249ca9e91ff6147296662f37\": rpc error: code = NotFound desc = could not find container \"ebbc3e53cda7690acb6cfd618f1ed4c66dccaf7d249ca9e91ff6147296662f37\": container with ID starting with ebbc3e53cda7690acb6cfd618f1ed4c66dccaf7d249ca9e91ff6147296662f37 not found: ID does not exist"
	Aug 19 12:03:56 addons-010148 kubelet[1622]: I0819 12:03:56.448625    1622 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e9bb922-d542-4708-9246-d07613819c6f" path="/var/lib/kubelet/pods/5e9bb922-d542-4708-9246-d07613819c6f/volumes"
	Aug 19 12:03:56 addons-010148 kubelet[1622]: I0819 12:03:56.448988    1622 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ac5f1ab-0bea-40cd-9f7f-128a514e5b76" path="/var/lib/kubelet/pods/8ac5f1ab-0bea-40cd-9f7f-128a514e5b76/volumes"
	Aug 19 12:03:56 addons-010148 kubelet[1622]: I0819 12:03:56.449291    1622 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd2c0881-7db8-4d07-9af4-29b0e4c51dfb" path="/var/lib/kubelet/pods/cd2c0881-7db8-4d07-9af4-29b0e4c51dfb/volumes"
	Aug 19 12:03:57 addons-010148 kubelet[1622]: I0819 12:03:57.765424    1622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-qjzs2" podStartSLOduration=1.606244073 podStartE2EDuration="4.765397245s" podCreationTimestamp="2024-08-19 12:03:53 +0000 UTC" firstStartedPulling="2024-08-19 12:03:53.581582456 +0000 UTC m=+355.252172186" lastFinishedPulling="2024-08-19 12:03:56.740735621 +0000 UTC m=+358.411325358" observedRunningTime="2024-08-19 12:03:57.764990671 +0000 UTC m=+359.435580418" watchObservedRunningTime="2024-08-19 12:03:57.765397245 +0000 UTC m=+359.435986989"
	Aug 19 12:03:58 addons-010148 kubelet[1622]: I0819 12:03:58.586061    1622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbk7g\" (UniqueName: \"kubernetes.io/projected/dbdd9f93-225f-497c-a174-9b777086b278-kube-api-access-qbk7g\") pod \"dbdd9f93-225f-497c-a174-9b777086b278\" (UID: \"dbdd9f93-225f-497c-a174-9b777086b278\") "
	Aug 19 12:03:58 addons-010148 kubelet[1622]: I0819 12:03:58.586117    1622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dbdd9f93-225f-497c-a174-9b777086b278-webhook-cert\") pod \"dbdd9f93-225f-497c-a174-9b777086b278\" (UID: \"dbdd9f93-225f-497c-a174-9b777086b278\") "
	Aug 19 12:03:58 addons-010148 kubelet[1622]: I0819 12:03:58.587932    1622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbdd9f93-225f-497c-a174-9b777086b278-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "dbdd9f93-225f-497c-a174-9b777086b278" (UID: "dbdd9f93-225f-497c-a174-9b777086b278"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 19 12:03:58 addons-010148 kubelet[1622]: I0819 12:03:58.587954    1622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbdd9f93-225f-497c-a174-9b777086b278-kube-api-access-qbk7g" (OuterVolumeSpecName: "kube-api-access-qbk7g") pod "dbdd9f93-225f-497c-a174-9b777086b278" (UID: "dbdd9f93-225f-497c-a174-9b777086b278"). InnerVolumeSpecName "kube-api-access-qbk7g". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 19 12:03:58 addons-010148 kubelet[1622]: E0819 12:03:58.618452    1622 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069038618279411,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:03:58 addons-010148 kubelet[1622]: E0819 12:03:58.618485    1622 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069038618279411,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:03:58 addons-010148 kubelet[1622]: I0819 12:03:58.666473    1622 scope.go:117] "RemoveContainer" containerID="7b2e93cd36916ab98c9b24200e23344ef022e118e64da34ed02eb8a2d6dea3d2"
	Aug 19 12:03:58 addons-010148 kubelet[1622]: I0819 12:03:58.680575    1622 scope.go:117] "RemoveContainer" containerID="2ce00e9767deb079c54c8ae563bc35c8ecc48082e9c93bb2be9a4664f4b91087"
	Aug 19 12:03:58 addons-010148 kubelet[1622]: I0819 12:03:58.686452    1622 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qbk7g\" (UniqueName: \"kubernetes.io/projected/dbdd9f93-225f-497c-a174-9b777086b278-kube-api-access-qbk7g\") on node \"addons-010148\" DevicePath \"\""
	Aug 19 12:03:58 addons-010148 kubelet[1622]: I0819 12:03:58.686488    1622 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dbdd9f93-225f-497c-a174-9b777086b278-webhook-cert\") on node \"addons-010148\" DevicePath \"\""
	Aug 19 12:03:58 addons-010148 kubelet[1622]: I0819 12:03:58.695386    1622 scope.go:117] "RemoveContainer" containerID="0128c03f68235eaf634d3cd838682f3f4b800669a1efbd4fbe48c647d0880309"
	Aug 19 12:04:00 addons-010148 kubelet[1622]: I0819 12:04:00.448684    1622 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbdd9f93-225f-497c-a174-9b777086b278" path="/var/lib/kubelet/pods/dbdd9f93-225f-497c-a174-9b777086b278/volumes"
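	The two eviction_manager errors above ("missing image stats") appear to stem from the CRI ImageFsInfo response embedded in the message: it reports the image filesystem (/var/lib/containers/storage/overlay-images) but an empty ContainerFilesystems list, so the kubelet cannot determine HasDedicatedImageFs. The raw response can be checked from inside the node with crictl (a diagnostic sketch, assuming the node is still reachable over minikube ssh):
	
		# Print the CRI runtime's image-filesystem usage exactly as the kubelet queries it
		minikube -p addons-010148 ssh -- sudo crictl imagefsinfo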
	
	
	==> storage-provisioner [593cfb1da27e071e2e5d1783f8bbdf03aefb01cf8585c94a1960b24d26abc516] <==
	I0819 11:58:23.675626       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 11:58:23.683082       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 11:58:23.683140       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 11:58:23.691567       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 11:58:23.691612       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c89b6e88-c331-4e5a-b646-bf95c466c783", APIVersion:"v1", ResourceVersion:"906", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-010148_9b0968d4-5bc0-49cf-b888-e7a17e02efa5 became leader
	I0819 11:58:23.691765       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-010148_9b0968d4-5bc0-49cf-b888-e7a17e02efa5!
	I0819 11:58:23.791911       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-010148_9b0968d4-5bc0-49cf-b888-e7a17e02efa5!
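	The storage provisioner's leader election uses the kube-system/k8s.io-minikube-hostpath Endpoints object named in the log as its lock; the current holder is recorded in that object's annotations. If leadership ever looks stuck, it can be read directly (a sketch against the same context):
	
		kubectl --context addons-010148 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml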
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-010148 -n addons-010148
helpers_test.go:261: (dbg) Run:  kubectl --context addons-010148 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (154.92s)
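The step that actually failed here is the in-node curl against the ingress; the matching ssh entry in the Audit table below never records an end time. When reproducing by hand it helps to bound the request and check the controller first (a diagnostic sketch, assuming the addons-010148 cluster is still up; the URL and Host header mirror the test's own request):

	# Did the nginx ingress controller come up, and was the Ingress admitted?
	kubectl --context addons-010148 -n ingress-nginx get pods
	kubectl --context addons-010148 get ingress -A
	# Re-run the test's request from inside the node with an explicit timeout
	minikube -p addons-010148 ssh -- curl -sS --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'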

TestAddons/parallel/MetricsServer (318.62s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.363088ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-phfcl" [82ed99b0-3ee4-42b7-9afc-f26a47b0d057] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.002786838s
addons_test.go:417: (dbg) Run:  kubectl --context addons-010148 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-010148 top pods -n kube-system: exit status 1 (71.697834ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-7mkcm, age: 2m45.652321376s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-010148 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-010148 top pods -n kube-system: exit status 1 (64.957255ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-7mkcm, age: 2m47.623602243s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-010148 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-010148 top pods -n kube-system: exit status 1 (70.382862ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-7mkcm, age: 2m50.676821378s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-010148 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-010148 top pods -n kube-system: exit status 1 (71.02841ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-7mkcm, age: 2m59.852217365s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-010148 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-010148 top pods -n kube-system: exit status 1 (88.143264ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-7mkcm, age: 3m7.957787213s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-010148 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-010148 top pods -n kube-system: exit status 1 (63.034231ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-7mkcm, age: 3m29.321428288s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-010148 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-010148 top pods -n kube-system: exit status 1 (63.178333ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-7mkcm, age: 3m55.938924408s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-010148 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-010148 top pods -n kube-system: exit status 1 (64.944576ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-7mkcm, age: 4m35.780400578s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-010148 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-010148 top pods -n kube-system: exit status 1 (61.58232ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-7mkcm, age: 5m32.679436142s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-010148 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-010148 top pods -n kube-system: exit status 1 (61.719428ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-7mkcm, age: 6m38.870891519s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-010148 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-010148 top pods -n kube-system: exit status 1 (61.673365ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-7mkcm, age: 7m13.509521217s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-010148 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-010148 top pods -n kube-system: exit status 1 (62.544984ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-7mkcm, age: 7m55.82243537s

** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
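kubectl top only returns data once the metrics.k8s.io APIService is Available and metrics-server has scraped the pod at least once; the repeated "Metrics not available for pod" errors above suggest the API answered but never had a sample for that coredns pod even after roughly eight minutes. Useful first checks in this situation (a diagnostic sketch against the same context):

	# Is the aggregated metrics API registered and Available?
	kubectl --context addons-010148 get apiservice v1beta1.metrics.k8s.io
	# Scrape failures usually show up in metrics-server's own log
	kubectl --context addons-010148 -n kube-system logs deploy/metrics-server
	# Query the metrics API directly, bypassing kubectl top's formatting
	kubectl --context addons-010148 get --raw /apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods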
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-010148 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-010148
helpers_test.go:235: (dbg) docker inspect addons-010148:

-- stdout --
	[
	    {
	        "Id": "0ade25f8970db790384c9e6218706172b64b7d67e0d579aa193d87af2e1658cd",
	        "Created": "2024-08-19T11:57:43.276181556Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 86015,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-19T11:57:43.400361499Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:197224e1b90979b98de246567852a03b60e3aa31dcd0de02a456282118daeb84",
	        "ResolvConfPath": "/var/lib/docker/containers/0ade25f8970db790384c9e6218706172b64b7d67e0d579aa193d87af2e1658cd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0ade25f8970db790384c9e6218706172b64b7d67e0d579aa193d87af2e1658cd/hostname",
	        "HostsPath": "/var/lib/docker/containers/0ade25f8970db790384c9e6218706172b64b7d67e0d579aa193d87af2e1658cd/hosts",
	        "LogPath": "/var/lib/docker/containers/0ade25f8970db790384c9e6218706172b64b7d67e0d579aa193d87af2e1658cd/0ade25f8970db790384c9e6218706172b64b7d67e0d579aa193d87af2e1658cd-json.log",
	        "Name": "/addons-010148",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-010148:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-010148",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bc4cb24e43a928762240c9acca974c6a3742c228dea0cc407e1cbcd11667f3c4-init/diff:/var/lib/docker/overlay2/3c736a112b0015011dd3f0c044c902fbcf6dfb1fd861cd8c6e5619934cdeaf76/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bc4cb24e43a928762240c9acca974c6a3742c228dea0cc407e1cbcd11667f3c4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bc4cb24e43a928762240c9acca974c6a3742c228dea0cc407e1cbcd11667f3c4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bc4cb24e43a928762240c9acca974c6a3742c228dea0cc407e1cbcd11667f3c4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-010148",
	                "Source": "/var/lib/docker/volumes/addons-010148/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-010148",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-010148",
	                "name.minikube.sigs.k8s.io": "addons-010148",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "54bd52e3edcad6d1addd31a7129b5043f9056e4a167e024fd3973abd56f95696",
	            "SandboxKey": "/var/run/docker/netns/54bd52e3edca",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-010148": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "22d92805fbe4e1d7aab9b57cc9bfee25f02e9c623b5b865a3e3b744ff69af499",
	                    "EndpointID": "3f0b382f618c6cba716c611d44b75bfda8e6e3022bd80283ad2ad3665ea0e745",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-010148",
	                        "0ade25f8970d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
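The inspect dump above is verbose; single fields can be pulled with Go templates instead, which is also how the harness queries state (compare the status --format={{.Host}} call below). Two hand-run equivalents (a sketch, assuming the container still exists):

	# Container state only
	docker inspect -f '{{.State.Status}}' addons-010148
	# The node IP the tests talk to
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-010148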
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-010148 -n addons-010148
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-010148 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-010148 logs -n 25: (1.122109539s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-docker-335603                                                                   | download-docker-335603 | jenkins | v1.33.1 | 19 Aug 24 11:57 UTC | 19 Aug 24 11:57 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-139959   | jenkins | v1.33.1 | 19 Aug 24 11:57 UTC |                     |
	|         | binary-mirror-139959                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45177                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-139959                                                                     | binary-mirror-139959   | jenkins | v1.33.1 | 19 Aug 24 11:57 UTC | 19 Aug 24 11:57 UTC |
	| addons  | enable dashboard -p                                                                         | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 11:57 UTC |                     |
	|         | addons-010148                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 11:57 UTC |                     |
	|         | addons-010148                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-010148 --wait=true                                                                | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 11:57 UTC | 19 Aug 24 12:00 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | addons-010148 addons disable                                                                | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | addons-010148                                                                               |                        |         |         |                     |                     |
	| ip      | addons-010148 ip                                                                            | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	| addons  | addons-010148 addons disable                                                                | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-010148 addons disable                                                                | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:01 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:01 UTC | 19 Aug 24 12:01 UTC |
	|         | -p addons-010148                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:01 UTC | 19 Aug 24 12:01 UTC |
	|         | addons-010148                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:01 UTC | 19 Aug 24 12:01 UTC |
	|         | -p addons-010148                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-010148 ssh cat                                                                       | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:01 UTC | 19 Aug 24 12:01 UTC |
	|         | /opt/local-path-provisioner/pvc-520035d6-e6c6-424a-94a4-de8464c48f46_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-010148 addons disable                                                                | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:01 UTC | 19 Aug 24 12:02 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-010148 addons disable                                                                | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:01 UTC | 19 Aug 24 12:01 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-010148 addons                                                                        | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:01 UTC | 19 Aug 24 12:01 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-010148 addons                                                                        | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:01 UTC | 19 Aug 24 12:01 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-010148 ssh curl -s                                                                   | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:01 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-010148 addons disable                                                                | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:01 UTC | 19 Aug 24 12:01 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-010148 ip                                                                            | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:03 UTC | 19 Aug 24 12:03 UTC |
	| addons  | addons-010148 addons disable                                                                | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:03 UTC | 19 Aug 24 12:03 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-010148 addons disable                                                                | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:03 UTC | 19 Aug 24 12:04 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-010148 addons                                                                        | addons-010148          | jenkins | v1.33.1 | 19 Aug 24 12:05 UTC | 19 Aug 24 12:05 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
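	The start invocation is wrapped across many rows of the table above; reassembled as one command line it reads roughly as follows (a sketch built from the listed flags, substituting the plain minikube binary for the test build at out/minikube-linux-amd64):
	
		minikube start -p addons-010148 --wait=true --memory=4000 --alsologtostderr \
		  --addons=registry --addons=metrics-server --addons=volumesnapshots \
		  --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner \
		  --addons=inspektor-gadget --addons=storage-provisioner-rancher \
		  --addons=nvidia-device-plugin --addons=yakd --addons=volcano \
		  --driver=docker --container-runtime=crio \
		  --addons=ingress --addons=ingress-dns --addons=helm-tiller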
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 11:57:21
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 11:57:21.136622   85279 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:57:21.136750   85279 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:57:21.136760   85279 out.go:358] Setting ErrFile to fd 2...
	I0819 11:57:21.136766   85279 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:57:21.136998   85279 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-77145/.minikube/bin
	I0819 11:57:21.137687   85279 out.go:352] Setting JSON to false
	I0819 11:57:21.138582   85279 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5936,"bootTime":1724062705,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 11:57:21.138646   85279 start.go:139] virtualization: kvm guest
	I0819 11:57:21.140615   85279 out.go:177] * [addons-010148] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 11:57:21.141875   85279 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 11:57:21.141892   85279 notify.go:220] Checking for updates...
	I0819 11:57:21.144168   85279 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:57:21.145514   85279 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-77145/kubeconfig
	I0819 11:57:21.146717   85279 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-77145/.minikube
	I0819 11:57:21.147965   85279 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 11:57:21.149077   85279 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:57:21.150501   85279 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:57:21.171324   85279 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 11:57:21.171432   85279 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 11:57:21.215090   85279 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-19 11:57:21.206969249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 11:57:21.215188   85279 docker.go:307] overlay module found
	I0819 11:57:21.216780   85279 out.go:177] * Using the docker driver based on user configuration
	I0819 11:57:21.217937   85279 start.go:297] selected driver: docker
	I0819 11:57:21.217959   85279 start.go:901] validating driver "docker" against <nil>
	I0819 11:57:21.217971   85279 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:57:21.218690   85279 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 11:57:21.264361   85279 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-19 11:57:21.254621157 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 11:57:21.264552   85279 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:57:21.264753   85279 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:57:21.266335   85279 out.go:177] * Using Docker driver with root privileges
	I0819 11:57:21.267669   85279 cni.go:84] Creating CNI manager for ""
	I0819 11:57:21.267687   85279 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 11:57:21.267697   85279 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 11:57:21.267785   85279 start.go:340] cluster config:
	{Name:addons-010148 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-010148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:57:21.269229   85279 out.go:177] * Starting "addons-010148" primary control-plane node in "addons-010148" cluster
	I0819 11:57:21.270474   85279 cache.go:121] Beginning downloading kic base image for docker with crio
	I0819 11:57:21.271620   85279 out.go:177] * Pulling base image v0.0.44-1723740748-19452 ...
	I0819 11:57:21.272645   85279 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 11:57:21.272678   85279 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19479-77145/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 11:57:21.272685   85279 cache.go:56] Caching tarball of preloaded images
	I0819 11:57:21.272737   85279 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 11:57:21.272755   85279 preload.go:172] Found /home/jenkins/minikube-integration/19479-77145/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 11:57:21.272763   85279 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 11:57:21.273123   85279 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/config.json ...
	I0819 11:57:21.273153   85279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/config.json: {Name:mk4719226a7e3df11c1f16a79f661e044f3c1059 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
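
The Saving config / WriteFile-acquiring pair above is minikube serializing the profile to config.json under a named lock (the `{Delay:500ms Timeout:1m0s}` spec belongs to a mutex helper not reproduced here). A rough, hypothetical illustration of the write half only, using a temp-file + rename so readers never see a half-written config.json:

```go
package config

import (
	"encoding/json"
	"os"
	"path/filepath"
)

// saveConfig is illustrative only, not minikube's implementation: it encodes
// cfg as indented JSON into a temp file in the target directory, then renames
// it over path so the update is atomic on POSIX filesystems.
func saveConfig(path string, cfg any) error {
	tmp, err := os.CreateTemp(filepath.Dir(path), ".config-*.json")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name()) // no-op after a successful rename
	enc := json.NewEncoder(tmp)
	enc.SetIndent("", "  ")
	if err := enc.Encode(cfg); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	return os.Rename(tmp.Name(), path)
}
```
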
	I0819 11:57:21.288181   85279 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 11:57:21.288324   85279 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 11:57:21.288345   85279 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0819 11:57:21.288356   85279 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0819 11:57:21.288369   85279 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0819 11:57:21.288380   85279 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from local cache
	I0819 11:57:33.216374   85279 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from cached tarball
	I0819 11:57:33.216425   85279 cache.go:194] Successfully downloaded all kic artifacts
	I0819 11:57:33.216468   85279 start.go:360] acquireMachinesLock for addons-010148: {Name:mk39b43a3047408d13d6bdd6d56728f128387755 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:57:33.216569   85279 start.go:364] duration metric: took 78.486µs to acquireMachinesLock for "addons-010148"
	I0819 11:57:33.216593   85279 start.go:93] Provisioning new machine with config: &{Name:addons-010148 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-010148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 11:57:33.216721   85279 start.go:125] createHost starting for "" (driver="docker")
	I0819 11:57:33.218570   85279 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0819 11:57:33.218901   85279 start.go:159] libmachine.API.Create for "addons-010148" (driver="docker")
	I0819 11:57:33.218949   85279 client.go:168] LocalClient.Create starting
	I0819 11:57:33.219051   85279 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19479-77145/.minikube/certs/ca.pem
	I0819 11:57:33.329753   85279 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19479-77145/.minikube/certs/cert.pem
	I0819 11:57:33.652634   85279 cli_runner.go:164] Run: docker network inspect addons-010148 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0819 11:57:33.667939   85279 cli_runner.go:211] docker network inspect addons-010148 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0819 11:57:33.668026   85279 network_create.go:284] running [docker network inspect addons-010148] to gather additional debugging logs...
	I0819 11:57:33.668050   85279 cli_runner.go:164] Run: docker network inspect addons-010148
	W0819 11:57:33.684360   85279 cli_runner.go:211] docker network inspect addons-010148 returned with exit code 1
	I0819 11:57:33.684410   85279 network_create.go:287] error running [docker network inspect addons-010148]: docker network inspect addons-010148: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-010148 not found
	I0819 11:57:33.684424   85279 network_create.go:289] output of [docker network inspect addons-010148]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-010148 not found
	
	** /stderr **
	I0819 11:57:33.684559   85279 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 11:57:33.700349   85279 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001af0780}
	I0819 11:57:33.700391   85279 network_create.go:124] attempt to create docker network addons-010148 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0819 11:57:33.700434   85279 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-010148 addons-010148
	I0819 11:57:33.759438   85279 network_create.go:108] docker network addons-010148 192.168.49.0/24 created
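
The network_create step just completed boils down to the single `docker network create` invocation on the Run: line above. Driven from Go it looks like the sketch below, which simply mirrors the logged flags through os/exec rather than minikube's cli_runner:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Flags copied verbatim from the `docker network create` Run: line above.
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.49.0/24",
		"--gateway=192.168.49.1",
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=addons-010148",
		"addons-010148",
	).CombinedOutput()
	fmt.Println(string(out)) // prints the new network ID on success
	if err != nil {
		panic(err)
	}
}
```
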
	I0819 11:57:33.759477   85279 kic.go:121] calculated static IP "192.168.49.2" for the "addons-010148" container
	I0819 11:57:33.759548   85279 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0819 11:57:33.774214   85279 cli_runner.go:164] Run: docker volume create addons-010148 --label name.minikube.sigs.k8s.io=addons-010148 --label created_by.minikube.sigs.k8s.io=true
	I0819 11:57:33.790735   85279 oci.go:103] Successfully created a docker volume addons-010148
	I0819 11:57:33.790820   85279 cli_runner.go:164] Run: docker run --rm --name addons-010148-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-010148 --entrypoint /usr/bin/test -v addons-010148:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib
	I0819 11:57:38.771046   85279 cli_runner.go:217] Completed: docker run --rm --name addons-010148-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-010148 --entrypoint /usr/bin/test -v addons-010148:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib: (4.980166693s)
	I0819 11:57:38.771079   85279 oci.go:107] Successfully prepared a docker volume addons-010148
	I0819 11:57:38.771098   85279 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 11:57:38.771122   85279 kic.go:194] Starting extracting preloaded images to volume ...
	I0819 11:57:38.771175   85279 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19479-77145/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-010148:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir
	I0819 11:57:43.216217   85279 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19479-77145/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-010148:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir: (4.444984551s)
	I0819 11:57:43.216249   85279 kic.go:203] duration metric: took 4.445124389s to extract preloaded images to volume ...
	W0819 11:57:43.216377   85279 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0819 11:57:43.216470   85279 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0819 11:57:43.262225   85279 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-010148 --name addons-010148 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-010148 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-010148 --network addons-010148 --ip 192.168.49.2 --volume addons-010148:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d
	I0819 11:57:43.559161   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Running}}
	I0819 11:57:43.576723   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:57:43.595143   85279 cli_runner.go:164] Run: docker exec addons-010148 stat /var/lib/dpkg/alternatives/iptables
	I0819 11:57:43.642299   85279 oci.go:144] the created container "addons-010148" has a running status.
	I0819 11:57:43.642351   85279 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa...
	I0819 11:57:43.764017   85279 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0819 11:57:43.783776   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:57:43.802680   85279 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0819 11:57:43.802701   85279 kic_runner.go:114] Args: [docker exec --privileged addons-010148 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0819 11:57:43.847341   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:57:43.867122   85279 machine.go:93] provisionDockerMachine start ...
	I0819 11:57:43.867252   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:57:43.891643   85279 main.go:141] libmachine: Using SSH client type: native
	I0819 11:57:43.891915   85279 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0819 11:57:43.891958   85279 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 11:57:43.892684   85279 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49208->127.0.0.1:32768: read: connection reset by peer
	I0819 11:57:47.013311   85279 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-010148
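
The "Using SSH client type: native" lines above, including the one transient connection reset while the container's sshd comes up, correspond to a golang.org/x/crypto/ssh session that runs `hostname`. A minimal sketch, assuming the key path and forwarded port 32768 taken from this log:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key and port as logged above; minikube resolves these from the machine config.
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; never do this in production
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32768", cfg)
	if err != nil {
		panic(err) // a retry loop would absorb the "connection reset" seen above
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.Output("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // "addons-010148", matching the SSH cmd output above
}
```
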
	
	I0819 11:57:47.013360   85279 ubuntu.go:169] provisioning hostname "addons-010148"
	I0819 11:57:47.013428   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:57:47.029891   85279 main.go:141] libmachine: Using SSH client type: native
	I0819 11:57:47.030125   85279 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0819 11:57:47.030141   85279 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-010148 && echo "addons-010148" | sudo tee /etc/hostname
	I0819 11:57:47.156889   85279 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-010148
	
	I0819 11:57:47.156961   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:57:47.173004   85279 main.go:141] libmachine: Using SSH client type: native
	I0819 11:57:47.173178   85279 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0819 11:57:47.173194   85279 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-010148' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-010148/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-010148' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 11:57:47.289902   85279 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 11:57:47.289943   85279 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19479-77145/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-77145/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-77145/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-77145/.minikube}
	I0819 11:57:47.289972   85279 ubuntu.go:177] setting up certificates
	I0819 11:57:47.289994   85279 provision.go:84] configureAuth start
	I0819 11:57:47.290065   85279 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-010148
	I0819 11:57:47.306442   85279 provision.go:143] copyHostCerts
	I0819 11:57:47.306512   85279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-77145/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-77145/.minikube/key.pem (1675 bytes)
	I0819 11:57:47.306616   85279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-77145/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-77145/.minikube/ca.pem (1078 bytes)
	I0819 11:57:47.306680   85279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-77145/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-77145/.minikube/cert.pem (1123 bytes)
	I0819 11:57:47.306740   85279 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-77145/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-77145/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-77145/.minikube/certs/ca-key.pem org=jenkins.addons-010148 san=[127.0.0.1 192.168.49.2 addons-010148 localhost minikube]
	I0819 11:57:47.397769   85279 provision.go:177] copyRemoteCerts
	I0819 11:57:47.397833   85279 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 11:57:47.397892   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:57:47.414888   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:57:47.502260   85279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-77145/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 11:57:47.523404   85279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-77145/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 11:57:47.543931   85279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-77145/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 11:57:47.564286   85279 provision.go:87] duration metric: took 274.271486ms to configureAuth
	I0819 11:57:47.564314   85279 ubuntu.go:193] setting minikube options for container-runtime
	I0819 11:57:47.564500   85279 config.go:182] Loaded profile config "addons-010148": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:57:47.564614   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:57:47.580919   85279 main.go:141] libmachine: Using SSH client type: native
	I0819 11:57:47.581105   85279 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0819 11:57:47.581120   85279 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 11:57:47.782340   85279 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 11:57:47.782366   85279 machine.go:96] duration metric: took 3.915219106s to provisionDockerMachine
	I0819 11:57:47.782397   85279 client.go:171] duration metric: took 14.563420774s to LocalClient.Create
	I0819 11:57:47.782435   85279 start.go:167] duration metric: took 14.563537451s to libmachine.API.Create "addons-010148"
	I0819 11:57:47.782449   85279 start.go:293] postStartSetup for "addons-010148" (driver="docker")
	I0819 11:57:47.782462   85279 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 11:57:47.782525   85279 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 11:57:47.782566   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:57:47.798973   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:57:47.886333   85279 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 11:57:47.889344   85279 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0819 11:57:47.889371   85279 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0819 11:57:47.889380   85279 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0819 11:57:47.889387   85279 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0819 11:57:47.889397   85279 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-77145/.minikube/addons for local assets ...
	I0819 11:57:47.889457   85279 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-77145/.minikube/files for local assets ...
	I0819 11:57:47.889481   85279 start.go:296] duration metric: took 107.026975ms for postStartSetup
	I0819 11:57:47.889741   85279 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-010148
	I0819 11:57:47.905971   85279 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/config.json ...
	I0819 11:57:47.906208   85279 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:57:47.906249   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:57:47.922551   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:57:48.006682   85279 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0819 11:57:48.010676   85279 start.go:128] duration metric: took 14.793934489s to createHost
	I0819 11:57:48.010702   85279 start.go:83] releasing machines lock for "addons-010148", held for 14.794120907s
	I0819 11:57:48.010769   85279 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-010148
	I0819 11:57:48.026566   85279 ssh_runner.go:195] Run: cat /version.json
	I0819 11:57:48.026623   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:57:48.026654   85279 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 11:57:48.026798   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:57:48.043954   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:57:48.044151   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:57:48.125687   85279 ssh_runner.go:195] Run: systemctl --version
	I0819 11:57:48.129726   85279 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 11:57:48.265379   85279 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 11:57:48.269641   85279 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 11:57:48.286470   85279 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0819 11:57:48.286555   85279 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 11:57:48.311241   85279 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0819 11:57:48.311272   85279 start.go:495] detecting cgroup driver to use...
	I0819 11:57:48.311306   85279 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0819 11:57:48.311352   85279 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 11:57:48.324726   85279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 11:57:48.334309   85279 docker.go:217] disabling cri-docker service (if available) ...
	I0819 11:57:48.334357   85279 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 11:57:48.346514   85279 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 11:57:48.359243   85279 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 11:57:48.439099   85279 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 11:57:48.519943   85279 docker.go:233] disabling docker service ...
	I0819 11:57:48.520017   85279 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 11:57:48.537984   85279 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 11:57:48.548220   85279 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 11:57:48.626622   85279 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 11:57:48.712024   85279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 11:57:48.722192   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 11:57:48.736585   85279 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 11:57:48.736647   85279 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:57:48.744908   85279 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 11:57:48.744975   85279 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:57:48.753248   85279 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:57:48.761586   85279 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:57:48.770214   85279 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 11:57:48.778565   85279 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:57:48.787708   85279 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:57:48.801481   85279 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
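
The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A Go sketch of the first two rewrites, equivalent in effect to the logged sed expressions (illustrative only; would need root on a real node):

```go
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	conf, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(conf, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		panic(err)
	}
}
```
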
	I0819 11:57:48.809887   85279 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 11:57:48.816928   85279 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 11:57:48.824306   85279 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:57:48.898080   85279 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 11:57:48.997480   85279 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 11:57:48.997544   85279 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 11:57:49.000800   85279 start.go:563] Will wait 60s for crictl version
	I0819 11:57:49.000855   85279 ssh_runner.go:195] Run: which crictl
	I0819 11:57:49.003889   85279 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 11:57:49.035659   85279 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0819 11:57:49.035743   85279 ssh_runner.go:195] Run: crio --version
	I0819 11:57:49.068930   85279 ssh_runner.go:195] Run: crio --version
	I0819 11:57:49.104451   85279 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0819 11:57:49.105924   85279 cli_runner.go:164] Run: docker network inspect addons-010148 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 11:57:49.121695   85279 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0819 11:57:49.125297   85279 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 11:57:49.135128   85279 kubeadm.go:883] updating cluster {Name:addons-010148 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-010148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 11:57:49.135256   85279 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 11:57:49.135300   85279 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 11:57:49.199167   85279 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 11:57:49.199191   85279 crio.go:433] Images already preloaded, skipping extraction
	I0819 11:57:49.199239   85279 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 11:57:49.230441   85279 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 11:57:49.230464   85279 cache_images.go:84] Images are preloaded, skipping loading
	I0819 11:57:49.230473   85279 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0819 11:57:49.230567   85279 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-010148 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-010148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 11:57:49.230631   85279 ssh_runner.go:195] Run: crio config
	I0819 11:57:49.270887   85279 cni.go:84] Creating CNI manager for ""
	I0819 11:57:49.270912   85279 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 11:57:49.270927   85279 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 11:57:49.270964   85279 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-010148 NodeName:addons-010148 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 11:57:49.271131   85279 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-010148"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
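
	The generated kubeadm config above is four YAML documents in one stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch of how a consumer can split the stream and route on kind; it uses gopkg.in/yaml.v3, and kubeadmYAML stands in (abbreviated) for the text above:

```go
package main

import (
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3"
)

const kubeadmYAML = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
` // abbreviated; the full config is printed above

func main() {
	dec := yaml.NewDecoder(strings.NewReader(kubeadmYAML))
	for {
		// Only the routing fields; each kind would decode into its own type.
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		err := dec.Decode(&doc)
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}
```
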
	
	I0819 11:57:49.271209   85279 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 11:57:49.279366   85279 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 11:57:49.279426   85279 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 11:57:49.287416   85279 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0819 11:57:49.303403   85279 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 11:57:49.319344   85279 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0819 11:57:49.335392   85279 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0819 11:57:49.338498   85279 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 11:57:49.348268   85279 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:57:49.423349   85279 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 11:57:49.435952   85279 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148 for IP: 192.168.49.2
	I0819 11:57:49.435972   85279 certs.go:194] generating shared ca certs ...
	I0819 11:57:49.435990   85279 certs.go:226] acquiring lock for ca certs: {Name:mkba49214281fce7ee45fe1d9fdbc484fa0bf44b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:49.436110   85279 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-77145/.minikube/ca.key
	I0819 11:57:49.496065   85279 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-77145/.minikube/ca.crt ...
	I0819 11:57:49.496094   85279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-77145/.minikube/ca.crt: {Name:mk6262b0d88ceffd2b2b4bc4c54db54d0ae61c38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:49.496260   85279 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-77145/.minikube/ca.key ...
	I0819 11:57:49.496272   85279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-77145/.minikube/ca.key: {Name:mk27397098351b2ea59af7f0894194f89474b2f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:49.496381   85279 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-77145/.minikube/proxy-client-ca.key
	I0819 11:57:49.628544   85279 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-77145/.minikube/proxy-client-ca.crt ...
	I0819 11:57:49.628576   85279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-77145/.minikube/proxy-client-ca.crt: {Name:mk1216eab57117a403bfe709a4830a59d446e833 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:49.628747   85279 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-77145/.minikube/proxy-client-ca.key ...
	I0819 11:57:49.628757   85279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-77145/.minikube/proxy-client-ca.key: {Name:mkc45b17cf30c56bbb27a361cd5ecffecdf5065b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:49.628824   85279 certs.go:256] generating profile certs ...
	I0819 11:57:49.628882   85279 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.key
	I0819 11:57:49.628895   85279 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.crt with IP's: []
	I0819 11:57:49.781863   85279 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.crt ...
	I0819 11:57:49.781901   85279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.crt: {Name:mk3e0630afee9742ed77d78f3e4835528ac4ab0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:49.782088   85279 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.key ...
	I0819 11:57:49.782099   85279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.key: {Name:mk6a306d20eeaacca346fea41bb9221251b42896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:49.782172   85279 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/apiserver.key.41a0df35
	I0819 11:57:49.782190   85279 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/apiserver.crt.41a0df35 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0819 11:57:50.079427   85279 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/apiserver.crt.41a0df35 ...
	I0819 11:57:50.079459   85279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/apiserver.crt.41a0df35: {Name:mk3ae81cc233bbe6b0a93138939ebda0aa2e0358 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:50.079619   85279 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/apiserver.key.41a0df35 ...
	I0819 11:57:50.079635   85279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/apiserver.key.41a0df35: {Name:mk6b84dd14beae1eedc22ccb7cae1e000ce51c76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:50.079706   85279 certs.go:381] copying /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/apiserver.crt.41a0df35 -> /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/apiserver.crt
	I0819 11:57:50.079777   85279 certs.go:385] copying /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/apiserver.key.41a0df35 -> /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/apiserver.key
	I0819 11:57:50.079823   85279 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/proxy-client.key
	I0819 11:57:50.079837   85279 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/proxy-client.crt with IP's: []
	I0819 11:57:50.189323   85279 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/proxy-client.crt ...
	I0819 11:57:50.189354   85279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/proxy-client.crt: {Name:mkcacef18d5ca1277820725073144a71c6a38986 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:50.189524   85279 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/proxy-client.key ...
	I0819 11:57:50.189534   85279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/proxy-client.key: {Name:mkfd5d83dbc372bad43edd8cce16667ad0eca786 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
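
Each "generating signed profile cert" line above is an X.509 issuance with the listed IP SANs. A compact crypto/x509 sketch of issuing a leaf for the apiserver SAN set logged at 11:57:49; this is a self-contained toy, not minikube's certs.go, and the CA is generated inline rather than loaded from the cache (error handling elided for brevity):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Toy CA, standing in for the cached minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		// SANs as logged: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}
```
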
	I0819 11:57:50.189699   85279 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-77145/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 11:57:50.189733   85279 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-77145/.minikube/certs/ca.pem (1078 bytes)
	I0819 11:57:50.189759   85279 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-77145/.minikube/certs/cert.pem (1123 bytes)
	I0819 11:57:50.189783   85279 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-77145/.minikube/certs/key.pem (1675 bytes)
	I0819 11:57:50.190418   85279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-77145/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 11:57:50.213394   85279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-77145/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 11:57:50.236665   85279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-77145/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 11:57:50.257595   85279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-77145/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 11:57:50.278202   85279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0819 11:57:50.298740   85279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 11:57:50.318777   85279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 11:57:50.339057   85279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 11:57:50.359644   85279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-77145/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 11:57:50.380625   85279 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 11:57:50.396065   85279 ssh_runner.go:195] Run: openssl version
	I0819 11:57:50.400845   85279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 11:57:50.409352   85279 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:57:50.412351   85279 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:57:50.412397   85279 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:57:50.418550   85279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 11:57:50.426499   85279 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 11:57:50.429312   85279 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 11:57:50.429358   85279 kubeadm.go:392] StartCluster: {Name:addons-010148 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-010148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:57:50.429434   85279 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 11:57:50.429469   85279 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 11:57:50.461670   85279 cri.go:89] found id: ""
	I0819 11:57:50.461728   85279 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 11:57:50.469688   85279 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 11:57:50.477425   85279 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0819 11:57:50.477479   85279 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 11:57:50.485041   85279 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 11:57:50.485058   85279 kubeadm.go:157] found existing configuration files:
	
	I0819 11:57:50.485103   85279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 11:57:50.492686   85279 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 11:57:50.492727   85279 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 11:57:50.500025   85279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 11:57:50.507776   85279 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 11:57:50.507823   85279 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 11:57:50.515035   85279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 11:57:50.522812   85279 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 11:57:50.522867   85279 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 11:57:50.530313   85279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 11:57:50.537588   85279 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 11:57:50.537639   85279 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
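	The grep/rm pairs above are minikube's stale-kubeconfig cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint; otherwise it is removed so kubeadm init can regenerate it. The same logic as a bash sketch (equivalent in spirit, using the paths and endpoint shown in the log):

	# Remove kubeconfigs that do not reference the expected endpoint,
	# so `kubeadm init` can regenerate them cleanly.
	ENDPOINT="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done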
	I0819 11:57:50.544852   85279 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0819 11:57:50.577292   85279 kubeadm.go:310] W0819 11:57:50.576512    1289 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 11:57:50.577806   85279 kubeadm.go:310] W0819 11:57:50.577287    1289 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 11:57:50.595300   85279 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1066-gcp\n", err: exit status 1
	I0819 11:57:50.643330   85279 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 11:57:59.131667   85279 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 11:57:59.131741   85279 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 11:57:59.131856   85279 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0819 11:57:59.131963   85279 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1066-gcp
	I0819 11:57:59.132000   85279 kubeadm.go:310] OS: Linux
	I0819 11:57:59.132066   85279 kubeadm.go:310] CGROUPS_CPU: enabled
	I0819 11:57:59.132130   85279 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0819 11:57:59.132198   85279 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0819 11:57:59.132263   85279 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0819 11:57:59.132329   85279 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0819 11:57:59.132424   85279 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0819 11:57:59.132489   85279 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0819 11:57:59.132534   85279 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0819 11:57:59.132597   85279 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0819 11:57:59.132693   85279 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 11:57:59.132834   85279 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 11:57:59.132977   85279 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 11:57:59.133073   85279 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 11:57:59.134942   85279 out.go:235]   - Generating certificates and keys ...
	I0819 11:57:59.135018   85279 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 11:57:59.135075   85279 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 11:57:59.135144   85279 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 11:57:59.135190   85279 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 11:57:59.135258   85279 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 11:57:59.135326   85279 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 11:57:59.135394   85279 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 11:57:59.135518   85279 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-010148 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0819 11:57:59.135592   85279 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 11:57:59.135726   85279 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-010148 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0819 11:57:59.135783   85279 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 11:57:59.135835   85279 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 11:57:59.135873   85279 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 11:57:59.135916   85279 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 11:57:59.135957   85279 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 11:57:59.136005   85279 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 11:57:59.136050   85279 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 11:57:59.136101   85279 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 11:57:59.136176   85279 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 11:57:59.136258   85279 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 11:57:59.136331   85279 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 11:57:59.137794   85279 out.go:235]   - Booting up control plane ...
	I0819 11:57:59.137896   85279 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 11:57:59.137969   85279 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 11:57:59.138025   85279 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 11:57:59.138108   85279 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 11:57:59.138183   85279 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 11:57:59.138222   85279 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 11:57:59.138332   85279 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 11:57:59.138426   85279 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 11:57:59.138475   85279 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.774421ms
	I0819 11:57:59.138535   85279 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 11:57:59.138591   85279 kubeadm.go:310] [api-check] The API server is healthy after 4.502162972s
	I0819 11:57:59.138719   85279 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 11:57:59.138876   85279 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 11:57:59.138937   85279 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 11:57:59.139082   85279 kubeadm.go:310] [mark-control-plane] Marking the node addons-010148 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 11:57:59.139136   85279 kubeadm.go:310] [bootstrap-token] Using token: ivphnl.4siv2zo7antv26ew
	I0819 11:57:59.140646   85279 out.go:235]   - Configuring RBAC rules ...
	I0819 11:57:59.140750   85279 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 11:57:59.140819   85279 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 11:57:59.140980   85279 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 11:57:59.141191   85279 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 11:57:59.141357   85279 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 11:57:59.141488   85279 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 11:57:59.141653   85279 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 11:57:59.141692   85279 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 11:57:59.141731   85279 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 11:57:59.141740   85279 kubeadm.go:310] 
	I0819 11:57:59.141794   85279 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 11:57:59.141801   85279 kubeadm.go:310] 
	I0819 11:57:59.141889   85279 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 11:57:59.141909   85279 kubeadm.go:310] 
	I0819 11:57:59.141950   85279 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 11:57:59.142022   85279 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 11:57:59.142084   85279 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 11:57:59.142094   85279 kubeadm.go:310] 
	I0819 11:57:59.142166   85279 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 11:57:59.142175   85279 kubeadm.go:310] 
	I0819 11:57:59.142241   85279 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 11:57:59.142250   85279 kubeadm.go:310] 
	I0819 11:57:59.142316   85279 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 11:57:59.142378   85279 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 11:57:59.142444   85279 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 11:57:59.142453   85279 kubeadm.go:310] 
	I0819 11:57:59.142520   85279 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 11:57:59.142588   85279 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 11:57:59.142597   85279 kubeadm.go:310] 
	I0819 11:57:59.142666   85279 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ivphnl.4siv2zo7antv26ew \
	I0819 11:57:59.142752   85279 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fbed9ee220740d41455e00aa7089abcb0e7d638dbb25406c98dd05f5405a9fed \
	I0819 11:57:59.142785   85279 kubeadm.go:310] 	--control-plane 
	I0819 11:57:59.142794   85279 kubeadm.go:310] 
	I0819 11:57:59.142910   85279 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 11:57:59.142929   85279 kubeadm.go:310] 
	I0819 11:57:59.142999   85279 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ivphnl.4siv2zo7antv26ew \
	I0819 11:57:59.143154   85279 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fbed9ee220740d41455e00aa7089abcb0e7d638dbb25406c98dd05f5405a9fed 
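	The kubelet-check and api-check phases in the init output above poll plain HTTP(S) health endpoints until they report healthy. They can be probed by hand from inside the node (for example via `minikube ssh`); a sketch, noting that the API server's /healthz can require credentials depending on cluster policy:

	# Kubelet health endpoint polled by kubeadm (shown in the log above).
	curl -sf http://127.0.0.1:10248/healthz && echo kubelet healthy
	# API server health endpoint; -k skips TLS verification for a quick probe.
	curl -skf https://127.0.0.1:8443/healthz && echo apiserver healthy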
	I0819 11:57:59.143172   85279 cni.go:84] Creating CNI manager for ""
	I0819 11:57:59.143183   85279 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 11:57:59.144692   85279 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 11:57:59.146040   85279 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 11:57:59.150121   85279 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 11:57:59.150147   85279 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 11:57:59.166393   85279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0819 11:57:59.350258   85279 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 11:57:59.350363   85279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:57:59.350408   85279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-010148 minikube.k8s.io/updated_at=2024_08_19T11_57_59_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6 minikube.k8s.io/name=addons-010148 minikube.k8s.io/primary=true
	I0819 11:57:59.357992   85279 ops.go:34] apiserver oom_adj: -16
	I0819 11:57:59.457588   85279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:57:59.957661   85279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:58:00.457681   85279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:58:00.957619   85279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:58:01.457824   85279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:58:01.957982   85279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:58:02.458209   85279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:58:02.958396   85279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:58:03.458453   85279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:58:03.523537   85279 kubeadm.go:1113] duration metric: took 4.173248662s to wait for elevateKubeSystemPrivileges
	I0819 11:58:03.523573   85279 kubeadm.go:394] duration metric: took 13.094219652s to StartCluster
	I0819 11:58:03.523602   85279 settings.go:142] acquiring lock: {Name:mk516bc3d1226b2b31d897fcb99c3d41b4827cb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:58:03.523746   85279 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-77145/kubeconfig
	I0819 11:58:03.524179   85279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-77145/kubeconfig: {Name:mk37d44a49445dbad6d9c9218733c895ba35a6d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:58:03.524402   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 11:58:03.524405   85279 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 11:58:03.524630   85279 config.go:182] Loaded profile config "addons-010148": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:58:03.524573   85279 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0819 11:58:03.524688   85279 addons.go:69] Setting default-storageclass=true in profile "addons-010148"
	I0819 11:58:03.524712   85279 addons.go:69] Setting yakd=true in profile "addons-010148"
	I0819 11:58:03.524742   85279 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-010148"
	I0819 11:58:03.524753   85279 addons.go:234] Setting addon yakd=true in "addons-010148"
	I0819 11:58:03.524741   85279 addons.go:69] Setting metrics-server=true in profile "addons-010148"
	I0819 11:58:03.524765   85279 addons.go:69] Setting storage-provisioner=true in profile "addons-010148"
	I0819 11:58:03.524787   85279 addons.go:234] Setting addon metrics-server=true in "addons-010148"
	I0819 11:58:03.524794   85279 host.go:66] Checking if "addons-010148" exists ...
	I0819 11:58:03.524798   85279 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-010148"
	I0819 11:58:03.524816   85279 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-010148"
	I0819 11:58:03.524808   85279 addons.go:69] Setting cloud-spanner=true in profile "addons-010148"
	I0819 11:58:03.524822   85279 addons.go:69] Setting ingress=true in profile "addons-010148"
	I0819 11:58:03.524839   85279 addons.go:69] Setting ingress-dns=true in profile "addons-010148"
	I0819 11:58:03.524849   85279 addons.go:234] Setting addon cloud-spanner=true in "addons-010148"
	I0819 11:58:03.524849   85279 addons.go:69] Setting registry=true in profile "addons-010148"
	I0819 11:58:03.524855   85279 addons.go:234] Setting addon ingress=true in "addons-010148"
	I0819 11:58:03.524859   85279 addons.go:234] Setting addon ingress-dns=true in "addons-010148"
	I0819 11:58:03.524869   85279 addons.go:234] Setting addon registry=true in "addons-010148"
	I0819 11:58:03.524860   85279 addons.go:69] Setting inspektor-gadget=true in profile "addons-010148"
	I0819 11:58:03.524884   85279 host.go:66] Checking if "addons-010148" exists ...
	I0819 11:58:03.524890   85279 addons.go:69] Setting gcp-auth=true in profile "addons-010148"
	I0819 11:58:03.524893   85279 host.go:66] Checking if "addons-010148" exists ...
	I0819 11:58:03.524830   85279 host.go:66] Checking if "addons-010148" exists ...
	I0819 11:58:03.524907   85279 mustload.go:65] Loading cluster: addons-010148
	I0819 11:58:03.524906   85279 addons.go:234] Setting addon inspektor-gadget=true in "addons-010148"
	I0819 11:58:03.524975   85279 host.go:66] Checking if "addons-010148" exists ...
	I0819 11:58:03.525067   85279 config.go:182] Loaded profile config "addons-010148": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:58:03.525116   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.525138   85279 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-010148"
	I0819 11:58:03.525204   85279 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-010148"
	I0819 11:58:03.525244   85279 host.go:66] Checking if "addons-010148" exists ...
	I0819 11:58:03.525299   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.525328   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.525347   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.525359   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.525399   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.525474   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.524840   85279 host.go:66] Checking if "addons-010148" exists ...
	I0819 11:58:03.526267   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.524884   85279 host.go:66] Checking if "addons-010148" exists ...
	I0819 11:58:03.524893   85279 host.go:66] Checking if "addons-010148" exists ...
	I0819 11:58:03.526609   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.527563   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.527913   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.528535   85279 out.go:177] * Verifying Kubernetes components...
	I0819 11:58:03.526628   85279 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-010148"
	I0819 11:58:03.528930   85279 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-010148"
	I0819 11:58:03.526642   85279 addons.go:69] Setting volcano=true in profile "addons-010148"
	I0819 11:58:03.529002   85279 addons.go:234] Setting addon volcano=true in "addons-010148"
	I0819 11:58:03.529047   85279 host.go:66] Checking if "addons-010148" exists ...
	I0819 11:58:03.524790   85279 addons.go:234] Setting addon storage-provisioner=true in "addons-010148"
	I0819 11:58:03.529158   85279 host.go:66] Checking if "addons-010148" exists ...
	I0819 11:58:03.529506   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.529600   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.526728   85279 addons.go:69] Setting helm-tiller=true in profile "addons-010148"
	I0819 11:58:03.530067   85279 addons.go:234] Setting addon helm-tiller=true in "addons-010148"
	I0819 11:58:03.530105   85279 host.go:66] Checking if "addons-010148" exists ...
	I0819 11:58:03.530188   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.526815   85279 addons.go:69] Setting volumesnapshots=true in profile "addons-010148"
	I0819 11:58:03.530675   85279 addons.go:234] Setting addon volumesnapshots=true in "addons-010148"
	I0819 11:58:03.530837   85279 host.go:66] Checking if "addons-010148" exists ...
	I0819 11:58:03.530771   85279 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:58:03.552220   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.554021   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.570872   85279 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0819 11:58:03.571967   85279 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0819 11:58:03.571990   85279 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0819 11:58:03.572085   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:58:03.575871   85279 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0819 11:58:03.576063   85279 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0819 11:58:03.577137   85279 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 11:58:03.577156   85279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0819 11:58:03.577211   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:58:03.577495   85279 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0819 11:58:03.577511   85279 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0819 11:58:03.577573   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	W0819 11:58:03.581707   85279 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0819 11:58:03.594551   85279 addons.go:234] Setting addon default-storageclass=true in "addons-010148"
	I0819 11:58:03.594598   85279 host.go:66] Checking if "addons-010148" exists ...
	I0819 11:58:03.595148   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.595785   85279 out.go:177]   - Using image docker.io/registry:2.8.3
	I0819 11:58:03.596016   85279 host.go:66] Checking if "addons-010148" exists ...
	I0819 11:58:03.597209   85279 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0819 11:58:03.598552   85279 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0819 11:58:03.599543   85279 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0819 11:58:03.600447   85279 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0819 11:58:03.600555   85279 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0819 11:58:03.600571   85279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0819 11:58:03.600629   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:58:03.603590   85279 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0819 11:58:03.603761   85279 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0819 11:58:03.603821   85279 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0819 11:58:03.605008   85279 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 11:58:03.605028   85279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0819 11:58:03.605081   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:58:03.605590   85279 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 11:58:03.605609   85279 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 11:58:03.605662   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:58:03.606016   85279 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0819 11:58:03.606032   85279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0819 11:58:03.606080   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:58:03.606103   85279 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0819 11:58:03.607709   85279 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 11:58:03.607866   85279 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0819 11:58:03.608909   85279 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 11:58:03.609097   85279 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0819 11:58:03.609110   85279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0819 11:58:03.609154   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:58:03.609374   85279 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0819 11:58:03.610324   85279 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 11:58:03.610341   85279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0819 11:58:03.610383   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:58:03.612776   85279 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0819 11:58:03.614117   85279 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0819 11:58:03.615320   85279 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0819 11:58:03.616424   85279 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0819 11:58:03.617474   85279 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0819 11:58:03.617500   85279 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0819 11:58:03.617557   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:58:03.632178   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:58:03.638867   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:58:03.642802   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:58:03.644942   85279 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:58:03.645013   85279 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0819 11:58:03.645377   85279 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-010148"
	I0819 11:58:03.645418   85279 host.go:66] Checking if "addons-010148" exists ...
	I0819 11:58:03.645900   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:03.652367   85279 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0819 11:58:03.652416   85279 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0819 11:58:03.652492   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:58:03.653131   85279 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 11:58:03.653158   85279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 11:58:03.653219   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:58:03.663852   85279 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 11:58:03.663875   85279 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 11:58:03.664017   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:58:03.664342   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:58:03.677286   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:58:03.679523   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:58:03.679903   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:58:03.685518   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:58:03.692439   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
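	The pipeline above rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway (192.168.49.1), and also inserts a log directive before errors. A more readable equivalent of the hosts injection alone (a sketch; assumes the stock Corefile with a `forward . /etc/resolv.conf` line, run with credentials for this cluster):

	kubectl -n kube-system get configmap coredns -o yaml \
	  | sed -e '/forward . \/etc\/resolv.conf/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
	  | kubectl replace -f -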
	I0819 11:58:03.694560   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:58:03.697743   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:58:03.700395   85279 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0819 11:58:03.701703   85279 out.go:177]   - Using image docker.io/busybox:stable
	I0819 11:58:03.703308   85279 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 11:58:03.703328   85279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0819 11:58:03.703379   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:58:03.708456   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:58:03.713610   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:58:03.714424   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:58:03.720144   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:58:03.753005   85279 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 11:58:04.044023   85279 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0819 11:58:04.044053   85279 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0819 11:58:04.048409   85279 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0819 11:58:04.048526   85279 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0819 11:58:04.053674   85279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 11:58:04.143290   85279 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0819 11:58:04.143326   85279 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0819 11:58:04.144183   85279 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0819 11:58:04.144204   85279 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0819 11:58:04.163520   85279 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0819 11:58:04.163549   85279 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0819 11:58:04.248398   85279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 11:58:04.248626   85279 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 11:58:04.248641   85279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0819 11:58:04.252912   85279 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0819 11:58:04.252939   85279 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0819 11:58:04.256803   85279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0819 11:58:04.343462   85279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 11:58:04.354363   85279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 11:58:04.355468   85279 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0819 11:58:04.355492   85279 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0819 11:58:04.358075   85279 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0819 11:58:04.358098   85279 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0819 11:58:04.360111   85279 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0819 11:58:04.360134   85279 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0819 11:58:04.446959   85279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 11:58:04.452287   85279 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0819 11:58:04.452364   85279 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0819 11:58:04.458850   85279 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0819 11:58:04.458924   85279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0819 11:58:04.459795   85279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 11:58:04.460304   85279 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 11:58:04.460370   85279 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 11:58:04.462906   85279 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0819 11:58:04.462944   85279 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0819 11:58:04.556245   85279 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0819 11:58:04.556297   85279 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0819 11:58:04.663727   85279 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0819 11:58:04.663814   85279 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0819 11:58:04.667093   85279 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 11:58:04.667115   85279 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 11:58:04.743367   85279 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0819 11:58:04.743458   85279 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0819 11:58:04.762675   85279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0819 11:58:04.854848   85279 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0819 11:58:04.854968   85279 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0819 11:58:04.863616   85279 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0819 11:58:04.863741   85279 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0819 11:58:04.952241   85279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0819 11:58:04.960441   85279 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0819 11:58:04.960541   85279 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0819 11:58:04.963080   85279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 11:58:05.055387   85279 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0819 11:58:05.055489   85279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0819 11:58:05.143560   85279 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0819 11:58:05.143663   85279 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0819 11:58:05.158672   85279 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0819 11:58:05.158764   85279 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0819 11:58:05.255118   85279 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0819 11:58:05.255161   85279 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0819 11:58:05.450346   85279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0819 11:58:05.459505   85279 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0819 11:58:05.459595   85279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0819 11:58:05.543719   85279 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 11:58:05.543804   85279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0819 11:58:05.566626   85279 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0819 11:58:05.566725   85279 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0819 11:58:05.762729   85279 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0819 11:58:05.762780   85279 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0819 11:58:05.847814   85279 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.15533019s)
	I0819 11:58:05.847904   85279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.794152603s)
	I0819 11:58:05.847915   85279 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0819 11:58:05.849280   85279 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.094779949s)
	I0819 11:58:05.850276   85279 node_ready.go:35] waiting up to 6m0s for node "addons-010148" to be "Ready" ...
	I0819 11:58:05.944195   85279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 11:58:05.948339   85279 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 11:58:05.948377   85279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0819 11:58:05.963260   85279 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0819 11:58:05.963309   85279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0819 11:58:06.242564   85279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 11:58:06.442710   85279 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0819 11:58:06.442753   85279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0819 11:58:06.548392   85279 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-010148" context rescaled to 1 replicas
	I0819 11:58:06.845641   85279 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 11:58:06.845723   85279 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0819 11:58:07.058722   85279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 11:58:07.865988   85279 node_ready.go:53] node "addons-010148" has status "Ready":"False"
	I0819 11:58:08.363167   85279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.11473133s)
	I0819 11:58:08.363278   85279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.106433769s)
	I0819 11:58:08.563204   85279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.219693793s)
	I0819 11:58:08.563325   85279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.208929254s)
	W0819 11:58:08.846823   85279 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
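	The warning above is a standard optimistic-concurrency conflict: another writer modified the StorageClass between minikube's read and its update, so the write against the stale resourceVersion is rejected. Re-issuing the change against the latest object succeeds; a manual equivalent (a sketch, assuming minikube's usual `standard` class and the `local-path` class from this log):

	# Demote the rancher-provided class and promote minikube's default.
	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
	kubectl patch storageclass standard -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'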
	I0819 11:58:10.065207   85279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.618143016s)
	I0819 11:58:10.065370   85279 addons.go:475] Verifying addon ingress=true in "addons-010148"
	I0819 11:58:10.065459   85279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.302692184s)
	I0819 11:58:10.065817   85279 addons.go:475] Verifying addon registry=true in "addons-010148"
	I0819 11:58:10.065512   85279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.113171891s)
	I0819 11:58:10.065583   85279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.102414349s)
	I0819 11:58:10.066106   85279 addons.go:475] Verifying addon metrics-server=true in "addons-010148"
	I0819 11:58:10.065636   85279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.615196702s)
	I0819 11:58:10.066002   85279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.605550401s)
	I0819 11:58:10.067563   85279 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-010148 service yakd-dashboard -n yakd-dashboard
	
	I0819 11:58:10.067571   85279 out.go:177] * Verifying ingress addon...
	I0819 11:58:10.067569   85279 out.go:177] * Verifying registry addon...
	I0819 11:58:10.070102   85279 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0819 11:58:10.070132   85279 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0819 11:58:10.146145   85279 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0819 11:58:10.146186   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:10.146373   85279 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0819 11:58:10.146395   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
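Each kapi.go:96 line that follows is one tick of a poll loop: list the pods matching a label selector, and keep waiting while any of them is still Pending. A rough client-go equivalent of that loop, assuming a recent apimachinery (for PollUntilContextTimeout) and the same hypothetical cs/ctx names; this is a sketch, not minikube's kapi implementation:

	package main

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForLabeledPods polls until every pod matching the selector is Running,
	// mirroring the "waiting for pod <selector>, current state: Pending" loop.
	func waitForLabeledPods(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // transient errors and empty lists just mean "keep polling"
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil
					}
				}
				return true, nil
			})
	}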
	I0819 11:58:10.353673   85279 node_ready.go:53] node "addons-010148" has status "Ready":"False"
	I0819 11:58:10.573689   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:10.574335   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:10.847077   85279 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0819 11:58:10.847175   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:58:10.874939   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:58:10.882821   85279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.93850512s)
	W0819 11:58:10.882874   85279 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 11:58:10.882901   85279 retry.go:31] will retry after 261.657823ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
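The failure above is the classic CRD establishment race: the same kubectl apply pass creates the VolumeSnapshot CRDs and a VolumeSnapshotClass custom resource, but the API server has not yet registered the new kind when the CR arrives, hence "ensure CRDs are installed first". minikube simply schedules a retry (next line) and later re-applies with --force. Another common approach is to wait for the CRD's Established condition before applying any CRs. A sketch under that assumption, using the apiextensions clientset (names hypothetical):

	package main

	import (
		"context"
		"time"

		apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
		apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
	)

	// waitForCRD blocks until the named CRD reports Established=True, after
	// which custom resources of that kind (e.g. VolumeSnapshotClass) can be
	// applied without tripping "no matches for kind".
	func waitForCRD(ctx context.Context, c apiextclient.Interface, name string) error {
		return wait.PollUntilContextTimeout(ctx, time.Second, 2*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // CRD not visible yet; keep polling
				}
				for _, cond := range crd.Status.Conditions {
					if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil
			})
	}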
	I0819 11:58:10.882909   85279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.64029641s)
	I0819 11:58:11.062243   85279 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0819 11:58:11.073186   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:11.073946   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:11.083051   85279 addons.go:234] Setting addon gcp-auth=true in "addons-010148"
	I0819 11:58:11.083109   85279 host.go:66] Checking if "addons-010148" exists ...
	I0819 11:58:11.083622   85279 cli_runner.go:164] Run: docker container inspect addons-010148 --format={{.State.Status}}
	I0819 11:58:11.100216   85279 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0819 11:58:11.100288   85279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-010148
	I0819 11:58:11.117931   85279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/addons-010148/id_rsa Username:docker}
	I0819 11:58:11.145054   85279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 11:58:11.577700   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:11.577978   85279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.519032792s)
	I0819 11:58:11.578022   85279 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-010148"
	I0819 11:58:11.578332   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:11.579711   85279 out.go:177] * Verifying csi-hostpath-driver addon...
	I0819 11:58:11.581555   85279 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0819 11:58:11.645407   85279 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0819 11:58:11.645436   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:12.073710   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:12.074252   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:12.085243   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:12.573247   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:12.573578   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:12.584687   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:12.853168   85279 node_ready.go:53] node "addons-010148" has status "Ready":"False"
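The node_ready.go lines poll the Node object itself: a node counts as Ready only when its NodeReady condition reports True, which here happens once the kubelet and CNI settle (at 11:58:22 below). A minimal sketch of that check with client-go, again with assumed cs/ctx names:

	package main

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// nodeIsReady reports whether the named node's NodeReady condition is True,
	// the same check behind the `"Ready":"False"` lines in this log.
	func nodeIsReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}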
	I0819 11:58:13.074054   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:13.074513   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:13.084331   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:13.574224   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:13.575651   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:13.646401   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:14.146385   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:14.148003   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:14.148812   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:14.573562   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:14.574229   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:14.585222   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:14.591560   85279 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.491313064s)
	I0819 11:58:14.591558   85279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.446450312s)
	I0819 11:58:14.593794   85279 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 11:58:14.595133   85279 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0819 11:58:14.596331   85279 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0819 11:58:14.596355   85279 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0819 11:58:14.651283   85279 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0819 11:58:14.651317   85279 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0819 11:58:14.668580   85279 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 11:58:14.668601   85279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0819 11:58:14.685265   85279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 11:58:14.854424   85279 node_ready.go:53] node "addons-010148" has status "Ready":"False"
	I0819 11:58:15.073334   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:15.074107   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:15.085616   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:15.266361   85279 addons.go:475] Verifying addon gcp-auth=true in "addons-010148"
	I0819 11:58:15.267861   85279 out.go:177] * Verifying gcp-auth addon...
	I0819 11:58:15.270081   85279 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0819 11:58:15.273247   85279 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0819 11:58:15.273265   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:15.574527   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:15.574810   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:15.585119   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:15.773978   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:16.074218   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:16.075274   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:16.084189   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:16.273274   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:16.574073   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:16.574761   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:16.585062   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:16.773374   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:17.073931   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:17.074614   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:17.084754   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:17.273336   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:17.353765   85279 node_ready.go:53] node "addons-010148" has status "Ready":"False"
	I0819 11:58:17.574050   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:17.574448   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:17.584612   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:17.773301   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:18.073960   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:18.074439   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:18.084408   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:18.273489   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:18.573785   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:18.574310   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:18.584646   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:18.773293   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:19.073571   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:19.074181   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:19.085141   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:19.274053   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:19.573054   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:19.573624   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:19.584604   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:19.772932   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:19.853490   85279 node_ready.go:53] node "addons-010148" has status "Ready":"False"
	I0819 11:58:20.072997   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:20.073311   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:20.084628   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:20.273086   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:20.573226   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:20.573638   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:20.584475   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:20.772860   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:21.075029   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:21.075332   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:21.084237   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:21.273318   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:21.573775   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:21.574378   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:21.584124   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:21.773959   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:22.072966   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:22.073546   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:22.084269   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:22.273682   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:22.352999   85279 node_ready.go:53] node "addons-010148" has status "Ready":"False"
	I0819 11:58:22.573819   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:22.574505   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:22.584467   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:22.772669   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:22.866753   85279 node_ready.go:49] node "addons-010148" has status "Ready":"True"
	I0819 11:58:22.866777   85279 node_ready.go:38] duration metric: took 17.016472351s for node "addons-010148" to be "Ready" ...
	I0819 11:58:22.866788   85279 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 11:58:22.952525   85279 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-7mkcm" in "kube-system" namespace to be "Ready" ...
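pod_ready.go applies a stricter test than the kapi phase polling above: a system-critical pod must carry the PodReady condition with status True, not merely be in the Running phase. A condensed client-go sketch of that condition check (hypothetical names, not the helper's actual source):

	package main

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// podIsReady returns true only when the pod's PodReady condition is True,
	// i.e. all containers have passed their readiness probes.
	func podIsReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}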
	I0819 11:58:23.074159   85279 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0819 11:58:23.074188   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:23.074466   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:23.085436   85279 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0819 11:58:23.085458   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:23.274353   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:23.574933   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:23.575714   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:23.677936   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:23.776342   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:24.074282   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:24.074640   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:24.085380   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:24.273584   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:24.575036   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:24.575937   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:24.644692   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:24.844812   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:24.958558   85279 pod_ready.go:93] pod "coredns-6f6b679f8f-7mkcm" in "kube-system" namespace has status "Ready":"True"
	I0819 11:58:24.958583   85279 pod_ready.go:82] duration metric: took 2.005961009s for pod "coredns-6f6b679f8f-7mkcm" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:24.958610   85279 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-010148" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:24.963913   85279 pod_ready.go:93] pod "etcd-addons-010148" in "kube-system" namespace has status "Ready":"True"
	I0819 11:58:24.963940   85279 pod_ready.go:82] duration metric: took 5.321705ms for pod "etcd-addons-010148" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:24.963964   85279 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-010148" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:24.968664   85279 pod_ready.go:93] pod "kube-apiserver-addons-010148" in "kube-system" namespace has status "Ready":"True"
	I0819 11:58:24.968688   85279 pod_ready.go:82] duration metric: took 4.715827ms for pod "kube-apiserver-addons-010148" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:24.968700   85279 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-010148" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:24.973751   85279 pod_ready.go:93] pod "kube-controller-manager-addons-010148" in "kube-system" namespace has status "Ready":"True"
	I0819 11:58:24.973817   85279 pod_ready.go:82] duration metric: took 5.10758ms for pod "kube-controller-manager-addons-010148" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:24.973866   85279 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-94dm9" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:24.978074   85279 pod_ready.go:93] pod "kube-proxy-94dm9" in "kube-system" namespace has status "Ready":"True"
	I0819 11:58:24.978092   85279 pod_ready.go:82] duration metric: took 4.217561ms for pod "kube-proxy-94dm9" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:24.978100   85279 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-010148" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:25.075373   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:25.075946   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:25.086554   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:25.273964   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:25.356626   85279 pod_ready.go:93] pod "kube-scheduler-addons-010148" in "kube-system" namespace has status "Ready":"True"
	I0819 11:58:25.356652   85279 pod_ready.go:82] duration metric: took 378.544376ms for pod "kube-scheduler-addons-010148" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:25.356665   85279 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:25.574003   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:25.574715   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:25.585448   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:25.772972   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:26.074485   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:26.074791   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:26.085169   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:26.274042   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:26.573859   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:26.574400   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:26.585683   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:26.774154   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:27.076247   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:27.076618   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:27.085629   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:27.273687   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:27.362348   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
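From here on the metrics-server pod repeatedly reports "Ready":"False". When a pod lingers in that state, its container statuses usually say why (failing readiness probe, image pull, crash loop). A small diagnostic sketch that prints those statuses, under the same assumed cs/ctx names; kubectl describe pod surfaces the equivalent information:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// explainNotReady prints each container's ready flag, restart count, and
	// any waiting reason, to show why a pod such as metrics-server stays not-Ready.
	func explainNotReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, st := range pod.Status.ContainerStatuses {
			fmt.Printf("container %s ready=%v restarts=%d", st.Name, st.Ready, st.RestartCount)
			if st.State.Waiting != nil {
				fmt.Printf(" waiting: %s (%s)", st.State.Waiting.Reason, st.State.Waiting.Message)
			}
			fmt.Println()
		}
		return nil
	}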
	I0819 11:58:27.573774   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:27.574127   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:27.585382   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:27.773768   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:28.073522   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:28.073744   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:28.085244   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:28.273675   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:28.574179   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:28.574327   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:28.585237   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:28.774846   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:29.074090   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:29.074197   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:29.086374   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:29.273717   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:29.362940   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:29.574904   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:29.575412   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:29.585815   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:29.773756   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:30.074017   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:30.074361   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:30.086135   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:30.273163   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:30.574558   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:30.574907   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:30.585533   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:30.773250   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:31.074149   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:31.074359   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:31.085982   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:31.273747   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:31.363556   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:31.573908   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:31.574117   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:31.586272   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:31.775979   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:32.074268   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:32.074444   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:32.148479   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:32.345255   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:32.574185   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:32.574473   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:32.586363   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:32.773386   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:33.074453   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:33.074628   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:33.085214   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:33.272894   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:33.574662   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:33.575173   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:33.586177   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:33.773579   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:33.863290   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:34.074084   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:34.074348   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:34.085011   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:34.273066   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:34.574250   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:34.574736   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:34.585459   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:34.773993   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:35.074370   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:35.074692   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:35.085013   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:35.273561   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:35.574296   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:35.574467   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:35.585136   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:35.773459   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:36.074319   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:36.074551   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:36.085553   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:36.274051   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:36.362686   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:36.573827   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:36.575025   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:36.585410   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:36.773003   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:37.074247   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:37.074527   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:37.085608   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:37.273612   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:37.574018   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:37.574286   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:37.585633   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:37.773497   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:38.074326   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:38.074873   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:38.084974   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:38.272932   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:38.362854   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:38.574325   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:38.574473   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:38.585819   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:38.774712   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:39.073623   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:39.074240   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:39.086337   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:39.274059   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:39.574201   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:39.574606   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:39.585550   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:39.773641   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:40.074949   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:40.077199   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:40.085640   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:40.273898   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:40.363056   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:40.574499   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:40.574731   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:40.584962   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:40.773093   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:41.074071   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:41.074459   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:41.085075   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:41.274827   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:41.573871   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:41.574112   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:41.585698   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:41.773413   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:42.073823   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:42.074022   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:42.085519   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:42.273788   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:42.573911   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:42.574141   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:42.585663   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:42.773730   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:42.862617   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:43.073635   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:43.074306   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:43.085918   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:43.272888   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:43.574217   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:43.574563   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:43.587193   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:43.773801   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:44.074523   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:44.074771   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:44.175964   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:44.273599   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:44.573978   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:44.574333   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:44.586476   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:44.773349   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:45.073929   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:45.074232   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:45.086742   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:45.273379   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:45.363310   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:45.574868   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:45.575475   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:45.585755   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:45.773769   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:46.075196   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:46.075771   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:46.086104   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:46.274113   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:46.574716   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:46.576222   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:46.586108   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:46.773784   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:47.074894   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:47.075293   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:47.085601   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:47.273748   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:47.574190   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:47.574751   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:47.585686   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:47.773319   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:47.861809   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:48.074222   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:48.074822   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:48.085796   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:48.273405   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:48.574373   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:48.574457   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:48.584792   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:48.774026   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:49.074171   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:49.074795   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:49.085338   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:49.273337   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:49.573974   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:49.574277   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:49.585784   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:49.773036   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:50.073975   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:50.074327   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:50.085255   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:50.274138   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:50.362548   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:50.574420   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:50.574868   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:50.585409   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:50.773534   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:51.074347   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:51.074643   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:51.086049   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:51.273714   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:51.574032   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:51.574377   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:51.586082   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:51.773221   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:52.074690   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:52.074947   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:52.085686   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:52.273889   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:52.362739   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:52.574054   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:52.574058   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:52.585606   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:52.773574   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:53.074153   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:53.074479   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:53.084668   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:53.274033   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:53.573993   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:53.574399   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:53.585755   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:53.772727   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:54.074791   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:54.075304   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:54.085798   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:54.273749   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:54.363429   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:54.573655   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:54.574106   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:54.586460   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:54.773569   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:55.074948   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:55.075519   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:55.087071   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:55.274207   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:55.574220   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:55.574403   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:55.585174   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:55.773511   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:56.074429   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:56.074502   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:56.086066   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:56.273774   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:56.574272   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:56.574895   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:56.585703   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:56.774010   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:56.863670   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:57.074244   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:57.074445   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:57.084897   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:57.272688   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:57.573593   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:57.573820   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:57.585437   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:57.774103   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:58.074573   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:58.074884   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:58.085469   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:58.273477   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:58.573531   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:58.573718   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:58.585282   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:58.773522   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:59.074280   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:59.074691   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:59.085379   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:59.273609   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:59.362305   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:59.574795   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:59.574942   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:59.585694   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:59.773600   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:00.074135   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:00.074592   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:00.085548   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:00.273767   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:00.576763   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:00.577089   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:00.585564   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:00.773570   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:01.074111   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:01.074437   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:01.085039   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:01.274178   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:01.574335   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:01.574675   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:01.585428   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:01.773622   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:01.864751   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:02.075581   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:02.077776   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:02.145610   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:02.273361   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:02.574313   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:02.574489   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:02.646128   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:02.773914   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:03.146552   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:03.148093   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:03.149764   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:03.362425   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:03.648995   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:03.650577   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:03.651236   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:03.774306   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:04.074462   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:04.075028   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:04.086328   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:04.274308   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:04.362828   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:04.574238   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:04.574878   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:04.585243   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:04.773489   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:05.074980   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:05.075433   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:05.086368   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:05.273996   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:05.573788   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:05.574373   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:05.586566   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:05.773166   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:06.074984   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:06.075224   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:06.085583   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:06.274153   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:06.574153   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:06.574481   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:06.586388   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:06.773913   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:06.863282   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:07.074204   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:07.074651   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:07.085316   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:07.273407   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:07.574413   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:07.574680   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:07.585645   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:07.773360   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:08.073998   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:08.074333   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:08.086043   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:08.273304   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:08.574210   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:08.574626   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:08.585305   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:08.773621   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:09.074050   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:09.074186   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:09.086046   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:09.273042   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:09.362305   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:09.573567   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:09.573773   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:09.585879   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:09.773435   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:10.074571   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:10.074912   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:10.085898   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:10.273645   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:10.573787   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:10.574154   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:10.585766   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:10.773685   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:11.074401   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:11.074929   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:11.085830   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:11.273557   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:11.362500   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:11.573663   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:11.574004   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:11.585571   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:11.773786   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:12.074768   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:12.075032   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:12.085541   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:12.273965   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:12.576687   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:12.577745   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:12.586393   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:12.774465   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:13.074293   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:13.074812   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:13.086015   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:13.273423   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:13.362848   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:13.574508   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:13.574899   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:13.585582   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:13.773977   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:14.074058   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:14.074456   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:14.086758   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:14.273591   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:14.574278   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:59:14.574800   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:14.676079   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:14.773054   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:15.074461   85279 kapi.go:107] duration metric: took 1m5.004356228s to wait for kubernetes.io/minikube-addons=registry ...
	I0819 11:59:15.074944   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:15.085795   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:15.273533   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:15.574696   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:15.585178   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:15.773400   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:15.862354   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:16.073962   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:16.085534   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:16.274067   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:16.574575   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:16.585663   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:16.774018   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:17.074443   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:17.084959   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:17.272919   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:17.574094   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:17.585702   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:17.773971   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:17.863412   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:18.074563   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:18.085444   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:18.273668   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:18.574645   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:18.585912   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:18.774433   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:19.075073   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:19.085538   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:19.274057   85279 kapi.go:107] duration metric: took 1m4.003971297s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0819 11:59:19.276134   85279 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-010148 cluster.
	I0819 11:59:19.277658   85279 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0819 11:59:19.343377   85279 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
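The three notices above describe the gcp-auth opt-out mechanism. As an illustrative sketch (the pod name is hypothetical, and the `=true` value is an assumption; the log only confirms the `gcp-auth-skip-secret` label key):

	# Hypothetical: exclude one pod from credential mounting.
	# "my-pod" is a placeholder; "true" as the label value is assumed.
	kubectl --context addons-010148 label pod my-pod gcp-auth-skip-secret=true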
	I0819 11:59:19.646459   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:19.649315   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:19.875860   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:20.147006   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:20.147221   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:20.646707   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:20.650208   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:21.149789   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:21.150696   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:21.575272   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:21.586422   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:22.074445   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:22.087116   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:22.362976   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:22.574631   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:22.586465   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:23.073768   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:23.086457   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:23.574273   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:23.586712   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:24.075347   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:24.086242   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:24.363254   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:24.575305   85279 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:59:24.585759   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:25.075702   85279 kapi.go:107] duration metric: took 1m15.005562553s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0819 11:59:25.087543   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:25.586404   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:26.147055   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:26.585775   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:26.862636   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:27.086673   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:27.585634   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:28.086365   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:28.586470   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:28.863337   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:29.086235   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:29.586565   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:30.087169   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:30.588245   85279 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:59:31.085709   85279 kapi.go:107] duration metric: took 1m19.504149657s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0819 11:59:31.087448   85279 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, cloud-spanner, storage-provisioner-rancher, helm-tiller, metrics-server, ingress-dns, yakd, inspektor-gadget, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0819 11:59:31.088598   85279 addons.go:510] duration metric: took 1m27.564093654s for enable addons: enabled=[nvidia-device-plugin storage-provisioner cloud-spanner storage-provisioner-rancher helm-tiller metrics-server ingress-dns yakd inspektor-gadget volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
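With all addons reported enabled, the label-selector polls above (kapi.go:96) can be reproduced by hand. A minimal sketch, assuming the pods live in the namespaces minikube normally uses (ingress-nginx for the controller, kube-system for the CSI driver):

	# Wait on the same label selectors the kapi.go loop polled.
	kubectl --context addons-010148 -n ingress-nginx wait pod \
	  --selector=app.kubernetes.io/name=ingress-nginx \
	  --for=condition=ready --timeout=90s
	kubectl --context addons-010148 -n kube-system wait pod \
	  --selector=kubernetes.io/minikube-addons=csi-hostpath-driver \
	  --for=condition=ready --timeout=90s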
	I0819 11:59:31.362282   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:33.362500   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:35.862539   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:38.362679   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:40.362791   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:42.861878   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:44.863531   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:47.362674   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:49.863265   85279 pod_ready.go:103] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:59:50.862432   85279 pod_ready.go:93] pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace has status "Ready":"True"
	I0819 11:59:50.862455   85279 pod_ready.go:82] duration metric: took 1m25.505781989s for pod "metrics-server-8988944d9-phfcl" in "kube-system" namespace to be "Ready" ...
	I0819 11:59:50.862464   85279 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-9gfqj" in "kube-system" namespace to be "Ready" ...
	I0819 11:59:50.866256   85279 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-9gfqj" in "kube-system" namespace has status "Ready":"True"
	I0819 11:59:50.866273   85279 pod_ready.go:82] duration metric: took 3.803358ms for pod "nvidia-device-plugin-daemonset-9gfqj" in "kube-system" namespace to be "Ready" ...
	I0819 11:59:50.866290   85279 pod_ready.go:39] duration metric: took 1m27.999491178s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
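The pod_ready.go polling above amounts to reading the pod's Ready condition. That condition can be read directly, assuming the metrics-server pod named in the log still exists:

	# Prints "True" once the Ready condition pod_ready.go was polling flips.
	kubectl --context addons-010148 -n kube-system get pod \
	  metrics-server-8988944d9-phfcl \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'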
	I0819 11:59:50.866345   85279 api_server.go:52] waiting for apiserver process to appear ...
	I0819 11:59:50.866382   85279 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 11:59:50.866431   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 11:59:50.901427   85279 cri.go:89] found id: "36d77af4416f318c7dd2e515b53d2f8e8f6a7aff8c1a76effc3dd34ac817ffa8"
	I0819 11:59:50.901448   85279 cri.go:89] found id: ""
	I0819 11:59:50.901463   85279 logs.go:276] 1 containers: [36d77af4416f318c7dd2e515b53d2f8e8f6a7aff8c1a76effc3dd34ac817ffa8]
	I0819 11:59:50.901521   85279 ssh_runner.go:195] Run: which crictl
	I0819 11:59:50.904691   85279 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 11:59:50.904752   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 11:59:50.940456   85279 cri.go:89] found id: "e17cd1075970dd8b734c02644a4b292bf0526aa5bf9947e2c87dd79e944a7707"
	I0819 11:59:50.940478   85279 cri.go:89] found id: ""
	I0819 11:59:50.940486   85279 logs.go:276] 1 containers: [e17cd1075970dd8b734c02644a4b292bf0526aa5bf9947e2c87dd79e944a7707]
	I0819 11:59:50.940535   85279 ssh_runner.go:195] Run: which crictl
	I0819 11:59:50.943807   85279 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 11:59:50.943872   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 11:59:50.977418   85279 cri.go:89] found id: "1ba40d35141d7f28ec8d67877f5f2cc2cb772315a96c8bdbb483de0aa075499f"
	I0819 11:59:50.977443   85279 cri.go:89] found id: ""
	I0819 11:59:50.977450   85279 logs.go:276] 1 containers: [1ba40d35141d7f28ec8d67877f5f2cc2cb772315a96c8bdbb483de0aa075499f]
	I0819 11:59:50.977504   85279 ssh_runner.go:195] Run: which crictl
	I0819 11:59:50.981023   85279 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 11:59:50.981074   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 11:59:51.013426   85279 cri.go:89] found id: "7fbf8e09fb2a038cef54df33300676b83e9136e5a8678f01941fa74163e76773"
	I0819 11:59:51.013446   85279 cri.go:89] found id: ""
	I0819 11:59:51.013453   85279 logs.go:276] 1 containers: [7fbf8e09fb2a038cef54df33300676b83e9136e5a8678f01941fa74163e76773]
	I0819 11:59:51.013503   85279 ssh_runner.go:195] Run: which crictl
	I0819 11:59:51.016662   85279 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 11:59:51.016727   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 11:59:51.052911   85279 cri.go:89] found id: "7576faede8f131e25b57c7b4f57de41151a944c8b2543a6464d7dcb00ba56c70"
	I0819 11:59:51.052930   85279 cri.go:89] found id: ""
	I0819 11:59:51.052938   85279 logs.go:276] 1 containers: [7576faede8f131e25b57c7b4f57de41151a944c8b2543a6464d7dcb00ba56c70]
	I0819 11:59:51.052998   85279 ssh_runner.go:195] Run: which crictl
	I0819 11:59:51.056280   85279 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 11:59:51.056356   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 11:59:51.091967   85279 cri.go:89] found id: "8061bb277832a86953e5674d2e2e5818133dadf970b95c826e4a6d669caf8824"
	I0819 11:59:51.091993   85279 cri.go:89] found id: ""
	I0819 11:59:51.092003   85279 logs.go:276] 1 containers: [8061bb277832a86953e5674d2e2e5818133dadf970b95c826e4a6d669caf8824]
	I0819 11:59:51.092061   85279 ssh_runner.go:195] Run: which crictl
	I0819 11:59:51.095684   85279 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 11:59:51.095760   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 11:59:51.163716   85279 cri.go:89] found id: "f31388602abfee12cd9da8716a9bfc73defa40f23787afd89f601672911eca12"
	I0819 11:59:51.163735   85279 cri.go:89] found id: ""
	I0819 11:59:51.163743   85279 logs.go:276] 1 containers: [f31388602abfee12cd9da8716a9bfc73defa40f23787afd89f601672911eca12]
	I0819 11:59:51.163790   85279 ssh_runner.go:195] Run: which crictl
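The discovery phase above runs one crictl query per control-plane component. Condensed into a loop, using the exact command from the log (run inside the node, e.g. after `minikube -p addons-010148 ssh`):

	# Enumerate container IDs per component, as cri.go does above.
	for name in kube-apiserver etcd coredns kube-scheduler \
	            kube-proxy kube-controller-manager kindnet; do
	  sudo crictl ps -a --quiet --name="$name"
	done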
	I0819 11:59:51.166958   85279 logs.go:123] Gathering logs for kube-controller-manager [8061bb277832a86953e5674d2e2e5818133dadf970b95c826e4a6d669caf8824] ...
	I0819 11:59:51.166979   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8061bb277832a86953e5674d2e2e5818133dadf970b95c826e4a6d669caf8824"
	I0819 11:59:51.230940   85279 logs.go:123] Gathering logs for container status ...
	I0819 11:59:51.230986   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:59:51.285488   85279 logs.go:123] Gathering logs for etcd [e17cd1075970dd8b734c02644a4b292bf0526aa5bf9947e2c87dd79e944a7707] ...
	I0819 11:59:51.285524   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e17cd1075970dd8b734c02644a4b292bf0526aa5bf9947e2c87dd79e944a7707"
	I0819 11:59:51.356531   85279 logs.go:123] Gathering logs for coredns [1ba40d35141d7f28ec8d67877f5f2cc2cb772315a96c8bdbb483de0aa075499f] ...
	I0819 11:59:51.356564   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ba40d35141d7f28ec8d67877f5f2cc2cb772315a96c8bdbb483de0aa075499f"
	I0819 11:59:51.393549   85279 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:59:51.393591   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:59:51.576925   85279 logs.go:123] Gathering logs for kube-apiserver [36d77af4416f318c7dd2e515b53d2f8e8f6a7aff8c1a76effc3dd34ac817ffa8] ...
	I0819 11:59:51.576956   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36d77af4416f318c7dd2e515b53d2f8e8f6a7aff8c1a76effc3dd34ac817ffa8"
	I0819 11:59:51.620992   85279 logs.go:123] Gathering logs for kube-scheduler [7fbf8e09fb2a038cef54df33300676b83e9136e5a8678f01941fa74163e76773] ...
	I0819 11:59:51.621028   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fbf8e09fb2a038cef54df33300676b83e9136e5a8678f01941fa74163e76773"
	I0819 11:59:51.663801   85279 logs.go:123] Gathering logs for kube-proxy [7576faede8f131e25b57c7b4f57de41151a944c8b2543a6464d7dcb00ba56c70] ...
	I0819 11:59:51.663847   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7576faede8f131e25b57c7b4f57de41151a944c8b2543a6464d7dcb00ba56c70"
	I0819 11:59:51.697290   85279 logs.go:123] Gathering logs for kindnet [f31388602abfee12cd9da8716a9bfc73defa40f23787afd89f601672911eca12] ...
	I0819 11:59:51.697322   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f31388602abfee12cd9da8716a9bfc73defa40f23787afd89f601672911eca12"
	I0819 11:59:51.734603   85279 logs.go:123] Gathering logs for CRI-O ...
	I0819 11:59:51.734633   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 11:59:51.807841   85279 logs.go:123] Gathering logs for kubelet ...
	I0819 11:59:51.807880   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 11:59:51.829390   85279 logs.go:138] Found kubelet problem: Aug 19 11:58:03 addons-010148 kubelet[1622]: W0819 11:58:03.846746    1622 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-010148" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-010148' and this object
	W0819 11:59:51.829586   85279 logs.go:138] Found kubelet problem: Aug 19 11:58:03 addons-010148 kubelet[1622]: E0819 11:58:03.846806    1622 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-010148\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-010148' and this object" logger="UnhandledError"
	I0819 11:59:51.876543   85279 logs.go:123] Gathering logs for dmesg ...
	I0819 11:59:51.876586   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:59:51.895329   85279 out.go:358] Setting ErrFile to fd 2...
	I0819 11:59:51.895360   85279 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 11:59:51.895421   85279 out.go:270] X Problems detected in kubelet:
	W0819 11:59:51.895436   85279 out.go:270]   Aug 19 11:58:03 addons-010148 kubelet[1622]: W0819 11:58:03.846746    1622 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-010148" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-010148' and this object
	W0819 11:59:51.895445   85279 out.go:270]   Aug 19 11:58:03 addons-010148 kubelet[1622]: E0819 11:58:03.846806    1622 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-010148\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-010148' and this object" logger="UnhandledError"
	I0819 11:59:51.895457   85279 out.go:358] Setting ErrFile to fd 2...
	I0819 11:59:51.895463   85279 out.go:392] TERM=,COLORTERM=, which probably does not support color
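The two kubelet problems flagged above came out of the journalctl scan at 11:59:51. A sketch of reproducing that scan by hand; the grep pattern is an assumption that simply matches the flagged lines, and `minikube logs --problems` is the likely built-in shortcut:

	# Inside the node: rescan the kubelet journal for the flagged errors.
	sudo journalctl -u kubelet -n 400 | grep -E 'reflector.go|UnhandledError'
	# From the host: ask minikube for known-problem log entries.
	minikube -p addons-010148 logs --problems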
	I0819 12:00:01.896246   85279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:00:01.910582   85279 api_server.go:72] duration metric: took 1m58.386144751s to wait for apiserver process to appear ...
	I0819 12:00:01.910613   85279 api_server.go:88] waiting for apiserver healthz status ...
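The healthz wait starting here reduces to probing the apiserver's /healthz endpoint; a minimal check from the host:

	# Probe the endpoint api_server.go waits on; prints "ok" when healthy.
	kubectl --context addons-010148 get --raw='/healthz'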
	I0819 12:00:01.910677   85279 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 12:00:01.910746   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 12:00:01.946693   85279 cri.go:89] found id: "36d77af4416f318c7dd2e515b53d2f8e8f6a7aff8c1a76effc3dd34ac817ffa8"
	I0819 12:00:01.946720   85279 cri.go:89] found id: ""
	I0819 12:00:01.946731   85279 logs.go:276] 1 containers: [36d77af4416f318c7dd2e515b53d2f8e8f6a7aff8c1a76effc3dd34ac817ffa8]
	I0819 12:00:01.946797   85279 ssh_runner.go:195] Run: which crictl
	I0819 12:00:01.950769   85279 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 12:00:01.950854   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 12:00:01.987423   85279 cri.go:89] found id: "e17cd1075970dd8b734c02644a4b292bf0526aa5bf9947e2c87dd79e944a7707"
	I0819 12:00:01.987452   85279 cri.go:89] found id: ""
	I0819 12:00:01.987464   85279 logs.go:276] 1 containers: [e17cd1075970dd8b734c02644a4b292bf0526aa5bf9947e2c87dd79e944a7707]
	I0819 12:00:01.987519   85279 ssh_runner.go:195] Run: which crictl
	I0819 12:00:01.991034   85279 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 12:00:01.991110   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 12:00:02.026336   85279 cri.go:89] found id: "1ba40d35141d7f28ec8d67877f5f2cc2cb772315a96c8bdbb483de0aa075499f"
	I0819 12:00:02.026368   85279 cri.go:89] found id: ""
	I0819 12:00:02.026379   85279 logs.go:276] 1 containers: [1ba40d35141d7f28ec8d67877f5f2cc2cb772315a96c8bdbb483de0aa075499f]
	I0819 12:00:02.026429   85279 ssh_runner.go:195] Run: which crictl
	I0819 12:00:02.030051   85279 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 12:00:02.030117   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 12:00:02.065326   85279 cri.go:89] found id: "7fbf8e09fb2a038cef54df33300676b83e9136e5a8678f01941fa74163e76773"
	I0819 12:00:02.065348   85279 cri.go:89] found id: ""
	I0819 12:00:02.065355   85279 logs.go:276] 1 containers: [7fbf8e09fb2a038cef54df33300676b83e9136e5a8678f01941fa74163e76773]
	I0819 12:00:02.065405   85279 ssh_runner.go:195] Run: which crictl
	I0819 12:00:02.069014   85279 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 12:00:02.069076   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 12:00:02.105104   85279 cri.go:89] found id: "7576faede8f131e25b57c7b4f57de41151a944c8b2543a6464d7dcb00ba56c70"
	I0819 12:00:02.105127   85279 cri.go:89] found id: ""
	I0819 12:00:02.105134   85279 logs.go:276] 1 containers: [7576faede8f131e25b57c7b4f57de41151a944c8b2543a6464d7dcb00ba56c70]
	I0819 12:00:02.105184   85279 ssh_runner.go:195] Run: which crictl
	I0819 12:00:02.108778   85279 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 12:00:02.108859   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 12:00:02.146311   85279 cri.go:89] found id: "8061bb277832a86953e5674d2e2e5818133dadf970b95c826e4a6d669caf8824"
	I0819 12:00:02.146338   85279 cri.go:89] found id: ""
	I0819 12:00:02.146349   85279 logs.go:276] 1 containers: [8061bb277832a86953e5674d2e2e5818133dadf970b95c826e4a6d669caf8824]
	I0819 12:00:02.146403   85279 ssh_runner.go:195] Run: which crictl
	I0819 12:00:02.150168   85279 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 12:00:02.150233   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 12:00:02.186023   85279 cri.go:89] found id: "f31388602abfee12cd9da8716a9bfc73defa40f23787afd89f601672911eca12"
	I0819 12:00:02.186045   85279 cri.go:89] found id: ""
	I0819 12:00:02.186052   85279 logs.go:276] 1 containers: [f31388602abfee12cd9da8716a9bfc73defa40f23787afd89f601672911eca12]
	I0819 12:00:02.186102   85279 ssh_runner.go:195] Run: which crictl
	I0819 12:00:02.189602   85279 logs.go:123] Gathering logs for kubelet ...
	I0819 12:00:02.189629   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 12:00:02.210345   85279 logs.go:138] Found kubelet problem: Aug 19 11:58:03 addons-010148 kubelet[1622]: W0819 11:58:03.846746    1622 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-010148" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-010148' and this object
	W0819 12:00:02.210521   85279 logs.go:138] Found kubelet problem: Aug 19 11:58:03 addons-010148 kubelet[1622]: E0819 11:58:03.846806    1622 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-010148\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-010148' and this object" logger="UnhandledError"
	I0819 12:00:02.259643   85279 logs.go:123] Gathering logs for etcd [e17cd1075970dd8b734c02644a4b292bf0526aa5bf9947e2c87dd79e944a7707] ...
	I0819 12:00:02.259687   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e17cd1075970dd8b734c02644a4b292bf0526aa5bf9947e2c87dd79e944a7707"
	I0819 12:00:02.301302   85279 logs.go:123] Gathering logs for kube-proxy [7576faede8f131e25b57c7b4f57de41151a944c8b2543a6464d7dcb00ba56c70] ...
	I0819 12:00:02.301343   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7576faede8f131e25b57c7b4f57de41151a944c8b2543a6464d7dcb00ba56c70"
	I0819 12:00:02.335269   85279 logs.go:123] Gathering logs for kube-controller-manager [8061bb277832a86953e5674d2e2e5818133dadf970b95c826e4a6d669caf8824] ...
	I0819 12:00:02.335299   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8061bb277832a86953e5674d2e2e5818133dadf970b95c826e4a6d669caf8824"
	I0819 12:00:02.393810   85279 logs.go:123] Gathering logs for container status ...
	I0819 12:00:02.393873   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 12:00:02.440020   85279 logs.go:123] Gathering logs for dmesg ...
	I0819 12:00:02.440057   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 12:00:02.461096   85279 logs.go:123] Gathering logs for describe nodes ...
	I0819 12:00:02.461135   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 12:00:02.564652   85279 logs.go:123] Gathering logs for kube-apiserver [36d77af4416f318c7dd2e515b53d2f8e8f6a7aff8c1a76effc3dd34ac817ffa8] ...
	I0819 12:00:02.564688   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36d77af4416f318c7dd2e515b53d2f8e8f6a7aff8c1a76effc3dd34ac817ffa8"
	I0819 12:00:02.610828   85279 logs.go:123] Gathering logs for coredns [1ba40d35141d7f28ec8d67877f5f2cc2cb772315a96c8bdbb483de0aa075499f] ...
	I0819 12:00:02.610871   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ba40d35141d7f28ec8d67877f5f2cc2cb772315a96c8bdbb483de0aa075499f"
	I0819 12:00:02.646438   85279 logs.go:123] Gathering logs for kube-scheduler [7fbf8e09fb2a038cef54df33300676b83e9136e5a8678f01941fa74163e76773] ...
	I0819 12:00:02.646471   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fbf8e09fb2a038cef54df33300676b83e9136e5a8678f01941fa74163e76773"
	I0819 12:00:02.685731   85279 logs.go:123] Gathering logs for kindnet [f31388602abfee12cd9da8716a9bfc73defa40f23787afd89f601672911eca12] ...
	I0819 12:00:02.685765   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f31388602abfee12cd9da8716a9bfc73defa40f23787afd89f601672911eca12"
	I0819 12:00:02.728312   85279 logs.go:123] Gathering logs for CRI-O ...
	I0819 12:00:02.728352   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 12:00:02.808759   85279 out.go:358] Setting ErrFile to fd 2...
	I0819 12:00:02.808802   85279 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 12:00:02.808871   85279 out.go:270] X Problems detected in kubelet:
	W0819 12:00:02.808883   85279 out.go:270]   Aug 19 11:58:03 addons-010148 kubelet[1622]: W0819 11:58:03.846746    1622 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-010148" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-010148' and this object
	W0819 12:00:02.808893   85279 out.go:270]   Aug 19 11:58:03 addons-010148 kubelet[1622]: E0819 11:58:03.846806    1622 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-010148\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-010148' and this object" logger="UnhandledError"
	I0819 12:00:02.808906   85279 out.go:358] Setting ErrFile to fd 2...
	I0819 12:00:02.808911   85279 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:00:12.809142   85279 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 12:00:12.812957   85279 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0819 12:00:12.813943   85279 api_server.go:141] control plane version: v1.31.0
	I0819 12:00:12.813967   85279 api_server.go:131] duration metric: took 10.903346298s to wait for apiserver health ...
	I0819 12:00:12.813977   85279 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 12:00:12.814006   85279 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 12:00:12.814066   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 12:00:12.848238   85279 cri.go:89] found id: "36d77af4416f318c7dd2e515b53d2f8e8f6a7aff8c1a76effc3dd34ac817ffa8"
	I0819 12:00:12.848260   85279 cri.go:89] found id: ""
	I0819 12:00:12.848268   85279 logs.go:276] 1 containers: [36d77af4416f318c7dd2e515b53d2f8e8f6a7aff8c1a76effc3dd34ac817ffa8]
	I0819 12:00:12.848310   85279 ssh_runner.go:195] Run: which crictl
	I0819 12:00:12.851670   85279 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 12:00:12.851730   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 12:00:12.884667   85279 cri.go:89] found id: "e17cd1075970dd8b734c02644a4b292bf0526aa5bf9947e2c87dd79e944a7707"
	I0819 12:00:12.884689   85279 cri.go:89] found id: ""
	I0819 12:00:12.884697   85279 logs.go:276] 1 containers: [e17cd1075970dd8b734c02644a4b292bf0526aa5bf9947e2c87dd79e944a7707]
	I0819 12:00:12.884747   85279 ssh_runner.go:195] Run: which crictl
	I0819 12:00:12.887886   85279 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 12:00:12.887958   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 12:00:12.922235   85279 cri.go:89] found id: "1ba40d35141d7f28ec8d67877f5f2cc2cb772315a96c8bdbb483de0aa075499f"
	I0819 12:00:12.922255   85279 cri.go:89] found id: ""
	I0819 12:00:12.922264   85279 logs.go:276] 1 containers: [1ba40d35141d7f28ec8d67877f5f2cc2cb772315a96c8bdbb483de0aa075499f]
	I0819 12:00:12.922321   85279 ssh_runner.go:195] Run: which crictl
	I0819 12:00:12.925705   85279 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 12:00:12.925769   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 12:00:12.959098   85279 cri.go:89] found id: "7fbf8e09fb2a038cef54df33300676b83e9136e5a8678f01941fa74163e76773"
	I0819 12:00:12.959118   85279 cri.go:89] found id: ""
	I0819 12:00:12.959125   85279 logs.go:276] 1 containers: [7fbf8e09fb2a038cef54df33300676b83e9136e5a8678f01941fa74163e76773]
	I0819 12:00:12.959172   85279 ssh_runner.go:195] Run: which crictl
	I0819 12:00:12.962537   85279 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 12:00:12.962601   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 12:00:12.996596   85279 cri.go:89] found id: "7576faede8f131e25b57c7b4f57de41151a944c8b2543a6464d7dcb00ba56c70"
	I0819 12:00:12.996622   85279 cri.go:89] found id: ""
	I0819 12:00:12.996632   85279 logs.go:276] 1 containers: [7576faede8f131e25b57c7b4f57de41151a944c8b2543a6464d7dcb00ba56c70]
	I0819 12:00:12.996680   85279 ssh_runner.go:195] Run: which crictl
	I0819 12:00:13.000166   85279 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 12:00:13.000227   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 12:00:13.032895   85279 cri.go:89] found id: "8061bb277832a86953e5674d2e2e5818133dadf970b95c826e4a6d669caf8824"
	I0819 12:00:13.032917   85279 cri.go:89] found id: ""
	I0819 12:00:13.032925   85279 logs.go:276] 1 containers: [8061bb277832a86953e5674d2e2e5818133dadf970b95c826e4a6d669caf8824]
	I0819 12:00:13.032982   85279 ssh_runner.go:195] Run: which crictl
	I0819 12:00:13.036143   85279 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 12:00:13.036203   85279 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 12:00:13.068287   85279 cri.go:89] found id: "f31388602abfee12cd9da8716a9bfc73defa40f23787afd89f601672911eca12"
	I0819 12:00:13.068313   85279 cri.go:89] found id: ""
	I0819 12:00:13.068323   85279 logs.go:276] 1 containers: [f31388602abfee12cd9da8716a9bfc73defa40f23787afd89f601672911eca12]
	I0819 12:00:13.068386   85279 ssh_runner.go:195] Run: which crictl
	I0819 12:00:13.071651   85279 logs.go:123] Gathering logs for kindnet [f31388602abfee12cd9da8716a9bfc73defa40f23787afd89f601672911eca12] ...
	I0819 12:00:13.071675   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f31388602abfee12cd9da8716a9bfc73defa40f23787afd89f601672911eca12"
	I0819 12:00:13.111266   85279 logs.go:123] Gathering logs for CRI-O ...
	I0819 12:00:13.111309   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 12:00:13.183780   85279 logs.go:123] Gathering logs for kube-apiserver [36d77af4416f318c7dd2e515b53d2f8e8f6a7aff8c1a76effc3dd34ac817ffa8] ...
	I0819 12:00:13.183819   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36d77af4416f318c7dd2e515b53d2f8e8f6a7aff8c1a76effc3dd34ac817ffa8"
	I0819 12:00:13.228385   85279 logs.go:123] Gathering logs for etcd [e17cd1075970dd8b734c02644a4b292bf0526aa5bf9947e2c87dd79e944a7707] ...
	I0819 12:00:13.228412   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e17cd1075970dd8b734c02644a4b292bf0526aa5bf9947e2c87dd79e944a7707"
	I0819 12:00:13.269991   85279 logs.go:123] Gathering logs for coredns [1ba40d35141d7f28ec8d67877f5f2cc2cb772315a96c8bdbb483de0aa075499f] ...
	I0819 12:00:13.270020   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ba40d35141d7f28ec8d67877f5f2cc2cb772315a96c8bdbb483de0aa075499f"
	I0819 12:00:13.304693   85279 logs.go:123] Gathering logs for kube-scheduler [7fbf8e09fb2a038cef54df33300676b83e9136e5a8678f01941fa74163e76773] ...
	I0819 12:00:13.304725   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fbf8e09fb2a038cef54df33300676b83e9136e5a8678f01941fa74163e76773"
	I0819 12:00:13.343624   85279 logs.go:123] Gathering logs for kube-proxy [7576faede8f131e25b57c7b4f57de41151a944c8b2543a6464d7dcb00ba56c70] ...
	I0819 12:00:13.343660   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7576faede8f131e25b57c7b4f57de41151a944c8b2543a6464d7dcb00ba56c70"
	I0819 12:00:13.376484   85279 logs.go:123] Gathering logs for kube-controller-manager [8061bb277832a86953e5674d2e2e5818133dadf970b95c826e4a6d669caf8824] ...
	I0819 12:00:13.376512   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8061bb277832a86953e5674d2e2e5818133dadf970b95c826e4a6d669caf8824"
	I0819 12:00:13.432173   85279 logs.go:123] Gathering logs for container status ...
	I0819 12:00:13.432286   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 12:00:13.472949   85279 logs.go:123] Gathering logs for kubelet ...
	I0819 12:00:13.472977   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 12:00:13.494713   85279 logs.go:138] Found kubelet problem: Aug 19 11:58:03 addons-010148 kubelet[1622]: W0819 11:58:03.846746    1622 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-010148" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-010148' and this object
	W0819 12:00:13.494893   85279 logs.go:138] Found kubelet problem: Aug 19 11:58:03 addons-010148 kubelet[1622]: E0819 11:58:03.846806    1622 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-010148\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-010148' and this object" logger="UnhandledError"
	I0819 12:00:13.543564   85279 logs.go:123] Gathering logs for dmesg ...
	I0819 12:00:13.543602   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 12:00:13.562749   85279 logs.go:123] Gathering logs for describe nodes ...
	I0819 12:00:13.562780   85279 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 12:00:13.658386   85279 out.go:358] Setting ErrFile to fd 2...
	I0819 12:00:13.658410   85279 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 12:00:13.658474   85279 out.go:270] X Problems detected in kubelet:
	W0819 12:00:13.658487   85279 out.go:270]   Aug 19 11:58:03 addons-010148 kubelet[1622]: W0819 11:58:03.846746    1622 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-010148" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-010148' and this object
	W0819 12:00:13.658493   85279 out.go:270]   Aug 19 11:58:03 addons-010148 kubelet[1622]: E0819 11:58:03.846806    1622 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-010148\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-010148' and this object" logger="UnhandledError"
	I0819 12:00:13.658501   85279 out.go:358] Setting ErrFile to fd 2...
	I0819 12:00:13.658506   85279 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:00:23.669062   85279 system_pods.go:59] 19 kube-system pods found
	I0819 12:00:23.669125   85279 system_pods.go:61] "coredns-6f6b679f8f-7mkcm" [c6c9f0bb-626f-4b5c-addb-605b703dad1a] Running
	I0819 12:00:23.669140   85279 system_pods.go:61] "csi-hostpath-attacher-0" [d3bb6a3e-0662-420b-b481-2520d71bb56a] Running
	I0819 12:00:23.669145   85279 system_pods.go:61] "csi-hostpath-resizer-0" [325c3846-5ce5-492a-b33a-662b8e3786c1] Running
	I0819 12:00:23.669151   85279 system_pods.go:61] "csi-hostpathplugin-2s76k" [0d7cc92a-db70-4d11-b4f3-7c4990113f97] Running
	I0819 12:00:23.669158   85279 system_pods.go:61] "etcd-addons-010148" [4c3e9bff-8d94-4b44-9b21-9d4208060167] Running
	I0819 12:00:23.669164   85279 system_pods.go:61] "kindnet-cppjb" [367f146a-254f-4dc3-b429-a96edfbe5d80] Running
	I0819 12:00:23.669170   85279 system_pods.go:61] "kube-apiserver-addons-010148" [6775f09c-82f3-4484-966c-539cbf577402] Running
	I0819 12:00:23.669179   85279 system_pods.go:61] "kube-controller-manager-addons-010148" [822cea42-6184-4359-bba1-6b01a6745253] Running
	I0819 12:00:23.669195   85279 system_pods.go:61] "kube-ingress-dns-minikube" [cd2c0881-7db8-4d07-9af4-29b0e4c51dfb] Running
	I0819 12:00:23.669200   85279 system_pods.go:61] "kube-proxy-94dm9" [debbf67c-381d-45ff-942c-c66366a93408] Running
	I0819 12:00:23.669205   85279 system_pods.go:61] "kube-scheduler-addons-010148" [ed387e21-f76e-45aa-a736-b721b15f1913] Running
	I0819 12:00:23.669212   85279 system_pods.go:61] "metrics-server-8988944d9-phfcl" [82ed99b0-3ee4-42b7-9afc-f26a47b0d057] Running
	I0819 12:00:23.669220   85279 system_pods.go:61] "nvidia-device-plugin-daemonset-9gfqj" [780617de-6822-48b4-bc3f-20932c2c5681] Running
	I0819 12:00:23.669226   85279 system_pods.go:61] "registry-6fb4cdfc84-vzmzk" [f04fc68c-2fa9-46e6-a18d-49a1a8a81968] Running
	I0819 12:00:23.669235   85279 system_pods.go:61] "registry-proxy-zddbz" [59ab7eba-4de5-4dd0-b7df-ee19cd688277] Running
	I0819 12:00:23.669242   85279 system_pods.go:61] "snapshot-controller-56fcc65765-nm2ls" [ce8958b5-a572-45b2-9873-0162c21c0841] Running
	I0819 12:00:23.669250   85279 system_pods.go:61] "snapshot-controller-56fcc65765-wm5wz" [e3fcb584-3ae8-4204-a72e-c4eeae36b98a] Running
	I0819 12:00:23.669255   85279 system_pods.go:61] "storage-provisioner" [5915f065-bf02-4049-9370-4c383eeceabb] Running
	I0819 12:00:23.669261   85279 system_pods.go:61] "tiller-deploy-b48cc5f79-99f2d" [a79cfc7e-dad8-4740-8386-760769073d6b] Running
	I0819 12:00:23.669271   85279 system_pods.go:74] duration metric: took 10.855286346s to wait for pod list to return data ...
	I0819 12:00:23.669283   85279 default_sa.go:34] waiting for default service account to be created ...
	I0819 12:00:23.672036   85279 default_sa.go:45] found service account: "default"
	I0819 12:00:23.672060   85279 default_sa.go:55] duration metric: took 2.768504ms for default service account to be created ...
	I0819 12:00:23.672069   85279 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 12:00:23.680136   85279 system_pods.go:86] 19 kube-system pods found
	I0819 12:00:23.680163   85279 system_pods.go:89] "coredns-6f6b679f8f-7mkcm" [c6c9f0bb-626f-4b5c-addb-605b703dad1a] Running
	I0819 12:00:23.680170   85279 system_pods.go:89] "csi-hostpath-attacher-0" [d3bb6a3e-0662-420b-b481-2520d71bb56a] Running
	I0819 12:00:23.680174   85279 system_pods.go:89] "csi-hostpath-resizer-0" [325c3846-5ce5-492a-b33a-662b8e3786c1] Running
	I0819 12:00:23.680177   85279 system_pods.go:89] "csi-hostpathplugin-2s76k" [0d7cc92a-db70-4d11-b4f3-7c4990113f97] Running
	I0819 12:00:23.680181   85279 system_pods.go:89] "etcd-addons-010148" [4c3e9bff-8d94-4b44-9b21-9d4208060167] Running
	I0819 12:00:23.680188   85279 system_pods.go:89] "kindnet-cppjb" [367f146a-254f-4dc3-b429-a96edfbe5d80] Running
	I0819 12:00:23.680191   85279 system_pods.go:89] "kube-apiserver-addons-010148" [6775f09c-82f3-4484-966c-539cbf577402] Running
	I0819 12:00:23.680195   85279 system_pods.go:89] "kube-controller-manager-addons-010148" [822cea42-6184-4359-bba1-6b01a6745253] Running
	I0819 12:00:23.680200   85279 system_pods.go:89] "kube-ingress-dns-minikube" [cd2c0881-7db8-4d07-9af4-29b0e4c51dfb] Running
	I0819 12:00:23.680203   85279 system_pods.go:89] "kube-proxy-94dm9" [debbf67c-381d-45ff-942c-c66366a93408] Running
	I0819 12:00:23.680206   85279 system_pods.go:89] "kube-scheduler-addons-010148" [ed387e21-f76e-45aa-a736-b721b15f1913] Running
	I0819 12:00:23.680210   85279 system_pods.go:89] "metrics-server-8988944d9-phfcl" [82ed99b0-3ee4-42b7-9afc-f26a47b0d057] Running
	I0819 12:00:23.680213   85279 system_pods.go:89] "nvidia-device-plugin-daemonset-9gfqj" [780617de-6822-48b4-bc3f-20932c2c5681] Running
	I0819 12:00:23.680217   85279 system_pods.go:89] "registry-6fb4cdfc84-vzmzk" [f04fc68c-2fa9-46e6-a18d-49a1a8a81968] Running
	I0819 12:00:23.680219   85279 system_pods.go:89] "registry-proxy-zddbz" [59ab7eba-4de5-4dd0-b7df-ee19cd688277] Running
	I0819 12:00:23.680223   85279 system_pods.go:89] "snapshot-controller-56fcc65765-nm2ls" [ce8958b5-a572-45b2-9873-0162c21c0841] Running
	I0819 12:00:23.680226   85279 system_pods.go:89] "snapshot-controller-56fcc65765-wm5wz" [e3fcb584-3ae8-4204-a72e-c4eeae36b98a] Running
	I0819 12:00:23.680228   85279 system_pods.go:89] "storage-provisioner" [5915f065-bf02-4049-9370-4c383eeceabb] Running
	I0819 12:00:23.680231   85279 system_pods.go:89] "tiller-deploy-b48cc5f79-99f2d" [a79cfc7e-dad8-4740-8386-760769073d6b] Running
	I0819 12:00:23.680238   85279 system_pods.go:126] duration metric: took 8.162576ms to wait for k8s-apps to be running ...
	I0819 12:00:23.680247   85279 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 12:00:23.680291   85279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:00:23.691554   85279 system_svc.go:56] duration metric: took 11.29787ms WaitForService to wait for kubelet
	I0819 12:00:23.691585   85279 kubeadm.go:582] duration metric: took 2m20.167153014s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 12:00:23.691605   85279 node_conditions.go:102] verifying NodePressure condition ...
	I0819 12:00:23.694403   85279 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0819 12:00:23.694429   85279 node_conditions.go:123] node cpu capacity is 8
	I0819 12:00:23.694445   85279 node_conditions.go:105] duration metric: took 2.834599ms to run NodePressure ...
	I0819 12:00:23.694459   85279 start.go:241] waiting for startup goroutines ...
	I0819 12:00:23.694469   85279 start.go:246] waiting for cluster config update ...
	I0819 12:00:23.694491   85279 start.go:255] writing updated cluster config ...
	I0819 12:00:23.694772   85279 ssh_runner.go:195] Run: rm -f paused
	I0819 12:00:23.742573   85279 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 12:00:23.744897   85279 out.go:177] * Done! kubectl is now configured to use "addons-010148" cluster and "default" namespace by default
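
	Note: the per-component sections that follow are the post-mortem log dump gathered over SSH inside the minikube node. They can be reproduced by hand with the same commands visible in the trace above (a manual reproduction sketch; <container-id> is a placeholder — substitute an ID from the container status table below):

	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u crio -n 400
	  sudo crictl ps -a
	  sudo /usr/bin/crictl logs --tail 400 <container-id>
	  sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400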
	
	
	==> CRI-O <==
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.418057867Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-bc57996ff-4vzf7 from CNI network \"kindnet\" (type=ptp)"
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.447386182Z" level=info msg="Stopped pod sandbox: 6c2abe16fa9af975374813861d57400843aeeb9c7b6e30921676e02f7e9c4a83" id=667cde57-5d51-44df-82d6-072d41d2817f name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.667511318Z" level=info msg="Removing container: 7b2e93cd36916ab98c9b24200e23344ef022e118e64da34ed02eb8a2d6dea3d2" id=a512aca7-5832-4244-b70f-9e72db71a297 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.680361242Z" level=info msg="Removed container 7b2e93cd36916ab98c9b24200e23344ef022e118e64da34ed02eb8a2d6dea3d2: ingress-nginx/ingress-nginx-admission-create-r6r8n/create" id=a512aca7-5832-4244-b70f-9e72db71a297 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.681563869Z" level=info msg="Removing container: 2ce00e9767deb079c54c8ae563bc35c8ecc48082e9c93bb2be9a4664f4b91087" id=ffb96613-106e-4d15-bc52-8b7b50d3a602 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.695085066Z" level=info msg="Removed container 2ce00e9767deb079c54c8ae563bc35c8ecc48082e9c93bb2be9a4664f4b91087: ingress-nginx/ingress-nginx-controller-bc57996ff-4vzf7/controller" id=ffb96613-106e-4d15-bc52-8b7b50d3a602 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.696376342Z" level=info msg="Removing container: 0128c03f68235eaf634d3cd838682f3f4b800669a1efbd4fbe48c647d0880309" id=531ad93a-187b-4857-8428-107f6751c103 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.709624808Z" level=info msg="Removed container 0128c03f68235eaf634d3cd838682f3f4b800669a1efbd4fbe48c647d0880309: ingress-nginx/ingress-nginx-admission-patch-dngcz/patch" id=531ad93a-187b-4857-8428-107f6751c103 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.713319270Z" level=info msg="Stopping pod sandbox: 6c2abe16fa9af975374813861d57400843aeeb9c7b6e30921676e02f7e9c4a83" id=bfdf6c63-1352-4c83-8ba5-01192a1a9370 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.713375338Z" level=info msg="Stopped pod sandbox (already stopped): 6c2abe16fa9af975374813861d57400843aeeb9c7b6e30921676e02f7e9c4a83" id=bfdf6c63-1352-4c83-8ba5-01192a1a9370 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.713674752Z" level=info msg="Removing pod sandbox: 6c2abe16fa9af975374813861d57400843aeeb9c7b6e30921676e02f7e9c4a83" id=a01d67da-a9d8-4a34-86c3-2cf2dd80f2c8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.719599731Z" level=info msg="Removed pod sandbox: 6c2abe16fa9af975374813861d57400843aeeb9c7b6e30921676e02f7e9c4a83" id=a01d67da-a9d8-4a34-86c3-2cf2dd80f2c8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.720161839Z" level=info msg="Stopping pod sandbox: 811a8961a31eacd56dd6176ae84df8eabc2919a9b1ac357164536fe77d350e31" id=f9c3d094-3f07-45a7-980f-89158dbe10c7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.720194333Z" level=info msg="Stopped pod sandbox (already stopped): 811a8961a31eacd56dd6176ae84df8eabc2919a9b1ac357164536fe77d350e31" id=f9c3d094-3f07-45a7-980f-89158dbe10c7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.720575756Z" level=info msg="Removing pod sandbox: 811a8961a31eacd56dd6176ae84df8eabc2919a9b1ac357164536fe77d350e31" id=61d3b581-6215-4d64-b71d-fe02ac96aa5c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.727010770Z" level=info msg="Removed pod sandbox: 811a8961a31eacd56dd6176ae84df8eabc2919a9b1ac357164536fe77d350e31" id=61d3b581-6215-4d64-b71d-fe02ac96aa5c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.727490041Z" level=info msg="Stopping pod sandbox: 95e49faad39048ef2f0585b5599c506727199677a2bc3ef68e4989d5437008b7" id=bc9dd9d7-23ec-4fb1-ad27-78130cd63c6e name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.727529266Z" level=info msg="Stopped pod sandbox (already stopped): 95e49faad39048ef2f0585b5599c506727199677a2bc3ef68e4989d5437008b7" id=bc9dd9d7-23ec-4fb1-ad27-78130cd63c6e name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.727825738Z" level=info msg="Removing pod sandbox: 95e49faad39048ef2f0585b5599c506727199677a2bc3ef68e4989d5437008b7" id=d08bccd6-38f9-458f-aac5-1e766306a545 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.733359387Z" level=info msg="Removed pod sandbox: 95e49faad39048ef2f0585b5599c506727199677a2bc3ef68e4989d5437008b7" id=d08bccd6-38f9-458f-aac5-1e766306a545 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.733788329Z" level=info msg="Stopping pod sandbox: d3e672d13bf8c449aed42e0556264bf7b567e898847170fa6db9e0c0aa3818fd" id=5487bab7-ba72-449e-909a-dcf52e833488 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.733823762Z" level=info msg="Stopped pod sandbox (already stopped): d3e672d13bf8c449aed42e0556264bf7b567e898847170fa6db9e0c0aa3818fd" id=5487bab7-ba72-449e-909a-dcf52e833488 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.734141636Z" level=info msg="Removing pod sandbox: d3e672d13bf8c449aed42e0556264bf7b567e898847170fa6db9e0c0aa3818fd" id=e87d7e1f-0b63-4253-89cd-98fc4fe148fb name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 12:03:58 addons-010148 crio[1028]: time="2024-08-19 12:03:58.740399266Z" level=info msg="Removed pod sandbox: d3e672d13bf8c449aed42e0556264bf7b567e898847170fa6db9e0c0aa3818fd" id=e87d7e1f-0b63-4253-89cd-98fc4fe148fb name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 12:05:59 addons-010148 crio[1028]: time="2024-08-19 12:05:59.242802722Z" level=info msg="Stopping container: 055544534d3c6fe2e0ef1454b2eb8b1e934cee9ed7924d9e32845e7c48a82f96 (timeout: 30s)" id=8bed9556-290b-4433-9025-7e9384d0e110 name=/runtime.v1.RuntimeService/StopContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	29b5139aeabec       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   1adff60419d8d       hello-world-app-55bf9c44b4-qjzs2
	c550b65af1c54       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                         4 minutes ago       Running             nginx                     0                   faa36f40b0de4       nginx
	cf4c86f2ad830       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     5 minutes ago       Running             busybox                   0                   cff6eb25eb7b9       busybox
	055544534d3c6       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Running             metrics-server            0                   16438c4c6da49       metrics-server-8988944d9-phfcl
	1ba40d35141d7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        7 minutes ago       Running             coredns                   0                   3925d9b1ded78       coredns-6f6b679f8f-7mkcm
	593cfb1da27e0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   f0aa565db5dce       storage-provisioner
	f31388602abfe       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b                      7 minutes ago       Running             kindnet-cni               0                   f899e31c46b0c       kindnet-cppjb
	7576faede8f13       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                        7 minutes ago       Running             kube-proxy                0                   b3dfdd2e1d8d3       kube-proxy-94dm9
	7fbf8e09fb2a0       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                        8 minutes ago       Running             kube-scheduler            0                   052e31579ba74       kube-scheduler-addons-010148
	e17cd1075970d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        8 minutes ago       Running             etcd                      0                   833d2f75bc1ff       etcd-addons-010148
	8061bb277832a       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                        8 minutes ago       Running             kube-controller-manager   0                   d3c6a9c216488       kube-controller-manager-addons-010148
	36d77af4416f3       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                        8 minutes ago       Running             kube-apiserver            0                   c7613a422ae4b       kube-apiserver-addons-010148
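
	The CONTAINER column holds the ID that crictl accepts (a unique prefix is enough). For example, to pull the logs of the metrics-server container that the CRI-O section above shows being stopped (an illustrative command, run inside the node):

	  sudo crictl logs --tail 400 055544534d3c6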
	
	
	==> coredns [1ba40d35141d7f28ec8d67877f5f2cc2cb772315a96c8bdbb483de0aa075499f] <==
	[INFO] 10.244.0.19:42174 - 33091 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00009759s
	[INFO] 10.244.0.19:47604 - 29219 "A IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.00397029s
	[INFO] 10.244.0.19:47604 - 7713 "AAAA IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004036227s
	[INFO] 10.244.0.19:44346 - 3410 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.003906896s
	[INFO] 10.244.0.19:44346 - 43095 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.003974597s
	[INFO] 10.244.0.19:53911 - 15066 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003823351s
	[INFO] 10.244.0.19:53911 - 30681 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.008608898s
	[INFO] 10.244.0.19:40577 - 36299 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00006749s
	[INFO] 10.244.0.19:40577 - 63182 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000073739s
	[INFO] 10.244.0.20:36919 - 25940 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000193953s
	[INFO] 10.244.0.20:39864 - 39507 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0001858s
	[INFO] 10.244.0.20:45599 - 42134 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000133358s
	[INFO] 10.244.0.20:53451 - 55020 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00011509s
	[INFO] 10.244.0.20:34568 - 33422 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000121545s
	[INFO] 10.244.0.20:46878 - 7425 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000137153s
	[INFO] 10.244.0.20:36307 - 17801 "AAAA IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.004680388s
	[INFO] 10.244.0.20:40391 - 55865 "A IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.004741399s
	[INFO] 10.244.0.20:57021 - 22476 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.004192647s
	[INFO] 10.244.0.20:46334 - 5438 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.00466871s
	[INFO] 10.244.0.20:39512 - 37463 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004172581s
	[INFO] 10.244.0.20:38926 - 44539 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004328609s
	[INFO] 10.244.0.20:60058 - 9234 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000758534s
	[INFO] 10.244.0.20:46975 - 12242 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000828275s
	[INFO] 10.244.0.23:59481 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00017755s
	[INFO] 10.244.0.23:50784 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000173374s
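
	The NXDOMAIN bursts above are normal Kubernetes DNS resolution, not failures: with options ndots:5, any name with fewer than five dots is tried against every suffix in the pod's search path before the bare name is resolved, which is why storage.googleapis.com only returns NOERROR on the final attempt. A pod resolv.conf consistent with the suffixes seen in this log would look roughly like this (a sketch, not captured from this run; 10.96.0.10 is minikube's default kube-dns ClusterIP):

	  search gcp-auth.svc.cluster.local svc.cluster.local cluster.local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	  nameserver 10.96.0.10
	  options ndots:5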
	
	
	==> describe nodes <==
	Name:               addons-010148
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-010148
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=addons-010148
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T11_57_59_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-010148
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 11:57:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-010148
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:05:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 12:04:05 +0000   Mon, 19 Aug 2024 11:57:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 12:04:05 +0000   Mon, 19 Aug 2024 11:57:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 12:04:05 +0000   Mon, 19 Aug 2024 11:57:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 12:04:05 +0000   Mon, 19 Aug 2024 11:58:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-010148
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 5536b84bdec745ff98ea72a7ce81abf4
	  System UUID:                a7c6b126-5a64-4229-83f1-4ce38b7718a7
	  Boot ID:                    27d0ea76-89fe-494c-b831-ffe5c08f219c
	  Kernel Version:             5.15.0-1066-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	  default                     hello-world-app-55bf9c44b4-qjzs2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 coredns-6f6b679f8f-7mkcm                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m57s
	  kube-system                 etcd-addons-010148                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m2s
	  kube-system                 kindnet-cppjb                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m57s
	  kube-system                 kube-apiserver-addons-010148             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m2s
	  kube-system                 kube-controller-manager-addons-010148    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m2s
	  kube-system                 kube-proxy-94dm9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m57s
	  kube-system                 kube-scheduler-addons-010148             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 metrics-server-8988944d9-phfcl           100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         7m52s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 7m51s  kube-proxy       
	  Normal   Starting                 8m2s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m2s   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  8m2s   kubelet          Node addons-010148 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m2s   kubelet          Node addons-010148 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m2s   kubelet          Node addons-010148 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m58s  node-controller  Node addons-010148 event: Registered Node addons-010148 in Controller
	  Normal   NodeReady                7m38s  kubelet          Node addons-010148 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: 52 54 00 10 a2 1d 52 54 00 6c b7 6b 08 00
	[  +0.001189] IPv4: martian source 192.168.122.1 from 10.244.0.4, on dev virbr0
	[  +0.000003] ll header: 00000000: 52 54 00 10 a2 1d 52 54 00 6c b7 6b 08 00
	[  +0.507412] IPv4: martian source 192.168.122.1 from 10.244.0.4, on dev virbr0
	[  +0.000006] ll header: 00000000: 52 54 00 10 a2 1d 52 54 00 6c b7 6b 08 00
	[  +0.000444] IPv4: martian source 192.168.122.1 from 10.244.0.2, on dev virbr0
	[  +0.000001] ll header: 00000000: 52 54 00 10 a2 1d 52 54 00 6c b7 6b 08 00
	[  +1.500650] IPv4: martian source 192.168.122.1 from 10.244.0.4, on dev virbr0
	[  +0.000006] ll header: 00000000: 52 54 00 10 a2 1d 52 54 00 6c b7 6b 08 00
	[  +0.001146] IPv4: martian source 192.168.122.1 from 10.244.0.2, on dev virbr0
	[  +0.000003] ll header: 00000000: 52 54 00 10 a2 1d 52 54 00 6c b7 6b 08 00
	[Aug19 12:01] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 5d 12 8b 78 0f de 05 b6 37 00 11 08 00
	[  +1.031417] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 7a 5d 12 8b 78 0f de 05 b6 37 00 11 08 00
	[  +2.015773] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 7a 5d 12 8b 78 0f de 05 b6 37 00 11 08 00
	[  +4.191588] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 7a 5d 12 8b 78 0f de 05 b6 37 00 11 08 00
	[  +8.191150] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 7a 5d 12 8b 78 0f de 05 b6 37 00 11 08 00
	[Aug19 12:02] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 5d 12 8b 78 0f de 05 b6 37 00 11 08 00
	[ +33.788481] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 7a 5d 12 8b 78 0f de 05 b6 37 00 11 08 00
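
	The "martian source" entries are the kernel flagging packets whose source address is not valid for the interface they arrived on (here 127.0.0.1 showing up on eth0); they are only printed when martian logging is enabled. The relevant sysctls can be inspected with (illustrative):

	  sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians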
	
	
	==> etcd [e17cd1075970dd8b734c02644a4b292bf0526aa5bf9947e2c87dd79e944a7707] <==
	{"level":"info","ts":"2024-08-19T11:57:54.353584Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T11:57:54.353689Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T11:57:54.354544Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T11:57:54.354706Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-08-19T11:58:05.456954Z","caller":"traceutil/trace.go:171","msg":"trace[525934890] linearizableReadLoop","detail":"{readStateIndex:365; appliedIndex:364; }","duration":"104.54643ms","start":"2024-08-19T11:58:05.352386Z","end":"2024-08-19T11:58:05.456933Z","steps":["trace[525934890] 'read index received'  (duration: 100.346744ms)","trace[525934890] 'applied index is now lower than readState.Index'  (duration: 4.198704ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T11:58:05.457119Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.700886ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-08-19T11:58:05.457170Z","caller":"traceutil/trace.go:171","msg":"trace[1781395707] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:353; }","duration":"104.779283ms","start":"2024-08-19T11:58:05.352381Z","end":"2024-08-19T11:58:05.457160Z","steps":["trace[1781395707] 'agreement among raft nodes before linearized reading'  (duration: 104.648098ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T11:58:05.457399Z","caller":"traceutil/trace.go:171","msg":"trace[434263034] transaction","detail":"{read_only:false; response_revision:353; number_of_response:1; }","duration":"210.662075ms","start":"2024-08-19T11:58:05.246728Z","end":"2024-08-19T11:58:05.457390Z","steps":["trace[434263034] 'process raft request'  (duration: 207.488696ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T11:58:06.146316Z","caller":"traceutil/trace.go:171","msg":"trace[1129713747] transaction","detail":"{read_only:false; response_revision:367; number_of_response:1; }","duration":"100.216897ms","start":"2024-08-19T11:58:06.046081Z","end":"2024-08-19T11:58:06.146298Z","steps":["trace[1129713747] 'process raft request'  (duration: 99.878221ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T11:58:06.150087Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.99354ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T11:58:06.159079Z","caller":"traceutil/trace.go:171","msg":"trace[37306806] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:370; }","duration":"111.99514ms","start":"2024-08-19T11:58:06.047067Z","end":"2024-08-19T11:58:06.159062Z","steps":["trace[37306806] 'agreement among raft nodes before linearized reading'  (duration: 102.979958ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T11:58:06.150131Z","caller":"traceutil/trace.go:171","msg":"trace[1302662073] transaction","detail":"{read_only:false; response_revision:368; number_of_response:1; }","duration":"103.80255ms","start":"2024-08-19T11:58:06.046314Z","end":"2024-08-19T11:58:06.150117Z","steps":["trace[1302662073] 'process raft request'  (duration: 99.720069ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T11:58:06.150230Z","caller":"traceutil/trace.go:171","msg":"trace[1694963596] transaction","detail":"{read_only:false; response_revision:369; number_of_response:1; }","duration":"103.670164ms","start":"2024-08-19T11:58:06.046551Z","end":"2024-08-19T11:58:06.150221Z","steps":["trace[1694963596] 'process raft request'  (duration: 99.517575ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T11:58:07.242822Z","caller":"traceutil/trace.go:171","msg":"trace[2014269219] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"182.940392ms","start":"2024-08-19T11:58:07.059849Z","end":"2024-08-19T11:58:07.242789Z","steps":["trace[2014269219] 'process raft request'  (duration: 100.1189ms)","trace[2014269219] 'compare'  (duration: 82.261843ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T11:58:07.244685Z","caller":"traceutil/trace.go:171","msg":"trace[120059853] linearizableReadLoop","detail":"{readStateIndex:406; appliedIndex:404; }","duration":"184.708768ms","start":"2024-08-19T11:58:07.059940Z","end":"2024-08-19T11:58:07.244648Z","steps":["trace[120059853] 'read index received'  (duration: 87.547766ms)","trace[120059853] 'applied index is now lower than readState.Index'  (duration: 97.160117ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T11:58:07.245277Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.320612ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-addons-010148\" ","response":"range_response_count:1 size:5750"}
	{"level":"info","ts":"2024-08-19T11:58:07.245318Z","caller":"traceutil/trace.go:171","msg":"trace[519150204] range","detail":"{range_begin:/registry/pods/kube-system/etcd-addons-010148; range_end:; response_count:1; response_revision:398; }","duration":"185.372377ms","start":"2024-08-19T11:58:07.059937Z","end":"2024-08-19T11:58:07.245310Z","steps":["trace[519150204] 'agreement among raft nodes before linearized reading'  (duration: 185.297954ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T11:58:07.765989Z","caller":"traceutil/trace.go:171","msg":"trace[154973813] transaction","detail":"{read_only:false; response_revision:432; number_of_response:1; }","duration":"107.841159ms","start":"2024-08-19T11:58:07.658137Z","end":"2024-08-19T11:58:07.765978Z","steps":["trace[154973813] 'process raft request'  (duration: 107.753325ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T11:58:07.845270Z","caller":"traceutil/trace.go:171","msg":"trace[210933175] linearizableReadLoop","detail":"{readStateIndex:450; appliedIndex:446; }","duration":"182.528486ms","start":"2024-08-19T11:58:07.662725Z","end":"2024-08-19T11:58:07.845254Z","steps":["trace[210933175] 'read index received'  (duration: 179.221846ms)","trace[210933175] 'applied index is now lower than readState.Index'  (duration: 3.305879ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T11:58:07.845516Z","caller":"traceutil/trace.go:171","msg":"trace[75732263] transaction","detail":"{read_only:false; response_revision:433; number_of_response:1; }","duration":"184.985681ms","start":"2024-08-19T11:58:07.660502Z","end":"2024-08-19T11:58:07.845488Z","steps":["trace[75732263] 'process raft request'  (duration: 184.476453ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T11:58:07.845755Z","caller":"traceutil/trace.go:171","msg":"trace[1602890120] transaction","detail":"{read_only:false; response_revision:434; number_of_response:1; }","duration":"183.39727ms","start":"2024-08-19T11:58:07.662347Z","end":"2024-08-19T11:58:07.845745Z","steps":["trace[1602890120] 'process raft request'  (duration: 182.734198ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T11:58:07.845985Z","caller":"traceutil/trace.go:171","msg":"trace[1254138284] transaction","detail":"{read_only:false; response_revision:435; number_of_response:1; }","duration":"183.538299ms","start":"2024-08-19T11:58:07.662434Z","end":"2024-08-19T11:58:07.845972Z","steps":["trace[1254138284] 'process raft request'  (duration: 182.690122ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T11:58:07.846157Z","caller":"traceutil/trace.go:171","msg":"trace[1200692081] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"183.54906ms","start":"2024-08-19T11:58:07.662598Z","end":"2024-08-19T11:58:07.846147Z","steps":["trace[1200692081] 'process raft request'  (duration: 182.554432ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T11:58:07.846601Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.860566ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-ingress-dns-minikube\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T11:58:07.846632Z","caller":"traceutil/trace.go:171","msg":"trace[571859508] range","detail":"{range_begin:/registry/pods/kube-system/kube-ingress-dns-minikube; range_end:; response_count:0; response_revision:438; }","duration":"183.901849ms","start":"2024-08-19T11:58:07.662722Z","end":"2024-08-19T11:58:07.846624Z","steps":["trace[571859508] 'agreement among raft nodes before linearized reading'  (duration: 183.841931ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:06:00 up  1:47,  0 users,  load average: 0.16, 0.70, 1.32
	Linux addons-010148 5.15.0-1066-gcp #74~20.04.1-Ubuntu SMP Fri Jul 26 09:28:41 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [f31388602abfee12cd9da8716a9bfc73defa40f23787afd89f601672911eca12] <==
	I0819 12:04:42.443271       1 main.go:299] handling current node
	W0819 12:04:52.058078       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 12:04:52.058108       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0819 12:04:52.443198       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 12:04:52.443236       1 main.go:299] handling current node
	I0819 12:05:02.443585       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 12:05:02.443634       1 main.go:299] handling current node
	W0819 12:05:03.677076       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 12:05:03.677113       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 12:05:12.443486       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 12:05:12.443534       1 main.go:299] handling current node
	I0819 12:05:22.443153       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 12:05:22.443193       1 main.go:299] handling current node
	I0819 12:05:32.443093       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 12:05:32.443126       1 main.go:299] handling current node
	W0819 12:05:36.888082       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 12:05:36.888118       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0819 12:05:37.183294       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 12:05:37.183326       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0819 12:05:42.443299       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 12:05:42.443343       1 main.go:299] handling current node
	W0819 12:05:50.552546       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 12:05:50.552581       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 12:05:52.443375       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 12:05:52.443420       1 main.go:299] handling current node
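
The pattern in this section is periodic: node handling succeeds every ten seconds, while the interleaved reflector failures show the kindnet service account being denied cluster-scope list/watch on namespaces, pods, and networkpolicies. A quick way to distinguish a genuine RBAC gap from a transient API error (a spot-check sketch, not part of the test run) is to probe the account's permissions via impersonation:

	kubectl --context addons-010148 auth can-i list namespaces \
	  --as=system:serviceaccount:kube-system:kindnet
	kubectl --context addons-010148 auth can-i list networkpolicies.networking.k8s.io \
	  --as=system:serviceaccount:kube-system:kindnet

A persistent "no" points at missing rules in the kindnet ClusterRole.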
	
	
	==> kube-apiserver [36d77af4416f318c7dd2e515b53d2f8e8f6a7aff8c1a76effc3dd34ac817ffa8] <==
	E0819 11:59:50.487844       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0819 12:00:34.170733       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34700: use of closed network connection
	E0819 12:00:34.329454       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34724: use of closed network connection
	I0819 12:00:49.028755       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0819 12:00:50.045103       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0819 12:01:08.938320       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0819 12:01:11.642736       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.123.28"}
	I0819 12:01:29.603327       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0819 12:01:29.956443       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.202.194"}
	I0819 12:01:36.171830       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 12:01:36.171995       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 12:01:36.246572       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 12:01:36.246650       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 12:01:36.252809       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 12:01:36.252939       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 12:01:36.259396       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 12:01:36.259533       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 12:01:36.269173       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 12:01:36.269310       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0819 12:01:37.253483       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0819 12:01:37.269267       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0819 12:01:37.279122       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0819 12:01:38.816376       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0819 12:01:45.213091       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.31:52722: read: connection reset by peer
	I0819 12:03:53.450347       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.89.6"}
	
	
	==> kube-controller-manager [8061bb277832a86953e5674d2e2e5818133dadf970b95c826e4a6d669caf8824] <==
	W0819 12:04:06.325952       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:04:06.326005       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 12:04:06.833420       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:04:06.833468       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 12:04:07.403749       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:04:07.403804       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 12:04:41.226445       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:04:41.226486       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 12:04:46.125270       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:04:46.125316       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 12:04:49.670154       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:04:49.670197       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 12:04:51.291212       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:04:51.291258       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 12:05:26.794787       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:05:26.794831       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 12:05:27.260872       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:05:27.260913       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 12:05:35.305278       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:05:35.305327       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 12:05:50.735140       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:05:50.735195       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0819 12:05:59.232522       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-8988944d9" duration="8.901µs"
	W0819 12:05:59.789399       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:05:59.789440       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
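
These recurring *v1.PartialObjectMetadata list/watch failures line up with the apiserver section above: after the volumesnapshot and gadget CRDs were removed around 12:00:50-12:01:37, the metadata informers kept retrying watches against group versions that no longer exist, so this reads as addon-teardown noise rather than a controller fault. To confirm nothing of those APIs remains (a diagnostic sketch, not part of the test run):

	kubectl --context addons-010148 get crd | grep -E 'snapshot|gadget'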
	
	
	==> kube-proxy [7576faede8f131e25b57c7b4f57de41151a944c8b2543a6464d7dcb00ba56c70] <==
	I0819 11:58:07.156787       1 server_linux.go:66] "Using iptables proxy"
	I0819 11:58:08.048066       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0819 11:58:08.052375       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 11:58:08.650820       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0819 11:58:08.650978       1 server_linux.go:169] "Using iptables Proxier"
	I0819 11:58:08.654724       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 11:58:08.655576       1 server.go:483] "Version info" version="v1.31.0"
	I0819 11:58:08.656147       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 11:58:08.657731       1 config.go:197] "Starting service config controller"
	I0819 11:58:08.659399       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 11:58:08.659091       1 config.go:326] "Starting node config controller"
	I0819 11:58:08.659524       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 11:58:08.658449       1 config.go:104] "Starting endpoint slice config controller"
	I0819 11:58:08.659542       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 11:58:08.760197       1 shared_informer.go:320] Caches are synced for service config
	I0819 11:58:08.760265       1 shared_informer.go:320] Caches are synced for node config
	I0819 11:58:08.842496       1 shared_informer.go:320] Caches are synced for endpoint slice config
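
The one warning in this otherwise clean startup is advisory: with nodePortAddresses unset, NodePort services accept connections on every local IP. The log itself names the remedy; as a sketch, the flag form is shown below, though in a kubeadm-style cluster the setting would normally live in the kube-proxy ConfigMap rather than on the command line:

	kube-proxy --nodeport-addresses primary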
	
	
	==> kube-scheduler [7fbf8e09fb2a038cef54df33300676b83e9136e5a8678f01941fa74163e76773] <==
	W0819 11:57:55.954627       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0819 11:57:55.954680       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0819 11:57:55.954716       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 11:57:55.954722       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:55.954748       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 11:57:55.954753       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0819 11:57:55.954768       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0819 11:57:55.954694       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:55.954632       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 11:57:55.955045       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:55.954849       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 11:57:55.955078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:55.954871       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 11:57:55.955101       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:55.954953       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 11:57:55.955125       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:56.759311       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 11:57:56.759352       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:56.819989       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 11:57:56.820039       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:56.868798       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 11:57:56.868837       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 11:57:56.870621       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 11:57:56.870661       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0819 11:57:58.651702       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
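
Unlike the kindnet denials above, these scheduler list/watch failures are the usual bootstrap race: every denial is timestamped 11:57:55-11:57:56, and by 11:57:58 the informer caches report synced, so they resolved on their own and imply no action.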
	
	
	==> kubelet <==
	Aug 19 12:04:38 addons-010148 kubelet[1622]: E0819 12:04:38.629755    1622 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069078629529492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:04:38 addons-010148 kubelet[1622]: E0819 12:04:38.629787    1622 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069078629529492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:04:48 addons-010148 kubelet[1622]: E0819 12:04:48.632105    1622 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069088631865421,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:04:48 addons-010148 kubelet[1622]: E0819 12:04:48.632140    1622 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069088631865421,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:04:58 addons-010148 kubelet[1622]: E0819 12:04:58.634958    1622 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069098634676298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:04:58 addons-010148 kubelet[1622]: E0819 12:04:58.634996    1622 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069098634676298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:05:08 addons-010148 kubelet[1622]: E0819 12:05:08.638353    1622 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069108638118736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:05:08 addons-010148 kubelet[1622]: E0819 12:05:08.638389    1622 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069108638118736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:05:18 addons-010148 kubelet[1622]: E0819 12:05:18.641919    1622 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069118641609088,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:05:18 addons-010148 kubelet[1622]: E0819 12:05:18.641952    1622 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069118641609088,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:05:25 addons-010148 kubelet[1622]: I0819 12:05:25.446328    1622 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 19 12:05:28 addons-010148 kubelet[1622]: E0819 12:05:28.644220    1622 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069128643957624,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:05:28 addons-010148 kubelet[1622]: E0819 12:05:28.644255    1622 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069128643957624,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:05:38 addons-010148 kubelet[1622]: E0819 12:05:38.647044    1622 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069138646781343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:05:38 addons-010148 kubelet[1622]: E0819 12:05:38.647089    1622 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069138646781343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:05:48 addons-010148 kubelet[1622]: E0819 12:05:48.649915    1622 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069148649676706,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:05:48 addons-010148 kubelet[1622]: E0819 12:05:48.649973    1622 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069148649676706,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:05:58 addons-010148 kubelet[1622]: E0819 12:05:58.652825    1622 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069158652560141,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:05:58 addons-010148 kubelet[1622]: E0819 12:05:58.652866    1622 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069158652560141,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:06:00 addons-010148 kubelet[1622]: I0819 12:06:00.571153    1622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/82ed99b0-3ee4-42b7-9afc-f26a47b0d057-tmp-dir\") pod \"82ed99b0-3ee4-42b7-9afc-f26a47b0d057\" (UID: \"82ed99b0-3ee4-42b7-9afc-f26a47b0d057\") "
	Aug 19 12:06:00 addons-010148 kubelet[1622]: I0819 12:06:00.571223    1622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9tf5\" (UniqueName: \"kubernetes.io/projected/82ed99b0-3ee4-42b7-9afc-f26a47b0d057-kube-api-access-x9tf5\") pod \"82ed99b0-3ee4-42b7-9afc-f26a47b0d057\" (UID: \"82ed99b0-3ee4-42b7-9afc-f26a47b0d057\") "
	Aug 19 12:06:00 addons-010148 kubelet[1622]: I0819 12:06:00.571517    1622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82ed99b0-3ee4-42b7-9afc-f26a47b0d057-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "82ed99b0-3ee4-42b7-9afc-f26a47b0d057" (UID: "82ed99b0-3ee4-42b7-9afc-f26a47b0d057"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 19 12:06:00 addons-010148 kubelet[1622]: I0819 12:06:00.572904    1622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82ed99b0-3ee4-42b7-9afc-f26a47b0d057-kube-api-access-x9tf5" (OuterVolumeSpecName: "kube-api-access-x9tf5") pod "82ed99b0-3ee4-42b7-9afc-f26a47b0d057" (UID: "82ed99b0-3ee4-42b7-9afc-f26a47b0d057"). InnerVolumeSpecName "kube-api-access-x9tf5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 19 12:06:00 addons-010148 kubelet[1622]: I0819 12:06:00.672219    1622 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-x9tf5\" (UniqueName: \"kubernetes.io/projected/82ed99b0-3ee4-42b7-9afc-f26a47b0d057-kube-api-access-x9tf5\") on node \"addons-010148\" DevicePath \"\""
	Aug 19 12:06:00 addons-010148 kubelet[1622]: I0819 12:06:00.672266    1622 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/82ed99b0-3ee4-42b7-9afc-f26a47b0d057-tmp-dir\") on node \"addons-010148\" DevicePath \"\""
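
Two things stand out here. The repeating "failed to get HasDedicatedImageFs" / "failed to synchronize" pairs are one condition resurfacing on the eviction manager's ten-second loop: the CRI runtime (cri-o in this job) answers ImageFsInfo without the container-filesystem stats the kubelet expects (note the empty ContainerFilesystems list), so eviction accounting is skipped each round. The 12:06:00 volume unmounts at the end are consistent with the metrics-server pod being torn down as the test disables the addon. To see exactly what the runtime reports (a diagnostic sketch, not part of the test run):

	out/minikube-linux-amd64 -p addons-010148 ssh "sudo crictl imagefsinfo"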
	
	
	==> storage-provisioner [593cfb1da27e071e2e5d1783f8bbdf03aefb01cf8585c94a1960b24d26abc516] <==
	I0819 11:58:23.675626       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 11:58:23.683082       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 11:58:23.683140       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 11:58:23.691567       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 11:58:23.691612       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c89b6e88-c331-4e5a-b646-bf95c466c783", APIVersion:"v1", ResourceVersion:"906", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-010148_9b0968d4-5bc0-49cf-b888-e7a17e02efa5 became leader
	I0819 11:58:23.691765       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-010148_9b0968d4-5bc0-49cf-b888-e7a17e02efa5!
	I0819 11:58:23.791911       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-010148_9b0968d4-5bc0-49cf-b888-e7a17e02efa5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-010148 -n addons-010148
helpers_test.go:261: (dbg) Run:  kubectl --context addons-010148 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (318.62s)

                                                
                                    

Test pass (301/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 16.22
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.19
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 13.1
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.2
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
20 TestDownloadOnlyKic 1.06
21 TestBinaryMirror 0.72
22 TestOffline 52.96
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 182.66
31 TestAddons/serial/GCPAuth/Namespaces 0.13
33 TestAddons/parallel/Registry 16.4
35 TestAddons/parallel/InspektorGadget 11.65
37 TestAddons/parallel/HelmTiller 10.52
39 TestAddons/parallel/CSI 53.9
40 TestAddons/parallel/Headlamp 18.37
41 TestAddons/parallel/CloudSpanner 5.52
42 TestAddons/parallel/LocalPath 60.9
43 TestAddons/parallel/NvidiaDevicePlugin 6.47
44 TestAddons/parallel/Yakd 10.85
45 TestAddons/StoppedEnableDisable 12.03
46 TestCertOptions 31
47 TestCertExpiration 220.66
49 TestForceSystemdFlag 26.29
50 TestForceSystemdEnv 29.94
52 TestKVMDriverInstallOrUpdate 4.6
56 TestErrorSpam/setup 23.01
57 TestErrorSpam/start 0.56
58 TestErrorSpam/status 0.82
59 TestErrorSpam/pause 1.45
60 TestErrorSpam/unpause 1.64
61 TestErrorSpam/stop 1.34
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 41.81
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 33.85
68 TestFunctional/serial/KubeContext 0.05
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.88
73 TestFunctional/serial/CacheCmd/cache/add_local 2.1
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.64
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.11
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 35.98
82 TestFunctional/serial/ComponentHealth 0.07
83 TestFunctional/serial/LogsCmd 1.31
84 TestFunctional/serial/LogsFileCmd 1.31
85 TestFunctional/serial/InvalidService 4.33
87 TestFunctional/parallel/ConfigCmd 0.36
88 TestFunctional/parallel/DashboardCmd 20.11
89 TestFunctional/parallel/DryRun 0.34
90 TestFunctional/parallel/InternationalLanguage 0.15
91 TestFunctional/parallel/StatusCmd 0.88
95 TestFunctional/parallel/ServiceCmdConnect 12.5
96 TestFunctional/parallel/AddonsCmd 0.14
97 TestFunctional/parallel/PersistentVolumeClaim 38.81
99 TestFunctional/parallel/SSHCmd 0.63
100 TestFunctional/parallel/CpCmd 1.81
101 TestFunctional/parallel/MySQL 24.24
102 TestFunctional/parallel/FileSync 0.24
103 TestFunctional/parallel/CertSync 1.59
107 TestFunctional/parallel/NodeLabels 0.07
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.55
111 TestFunctional/parallel/License 0.59
112 TestFunctional/parallel/ServiceCmd/DeployApp 10.2
114 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.43
115 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 13.25
118 TestFunctional/parallel/ServiceCmd/List 0.47
119 TestFunctional/parallel/ServiceCmd/JSONOutput 0.58
120 TestFunctional/parallel/ServiceCmd/HTTPS 0.46
121 TestFunctional/parallel/ServiceCmd/Format 0.5
122 TestFunctional/parallel/ServiceCmd/URL 0.45
123 TestFunctional/parallel/Version/short 0.05
124 TestFunctional/parallel/Version/components 0.89
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.2
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
129 TestFunctional/parallel/ImageCommands/ImageBuild 4.5
130 TestFunctional/parallel/ImageCommands/Setup 2.03
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
132 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
136 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
137 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
138 TestFunctional/parallel/ProfileCmd/profile_list 0.36
139 TestFunctional/parallel/MountCmd/any-port 7.81
140 TestFunctional/parallel/ProfileCmd/profile_json_output 0.36
141 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.38
142 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.88
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
146 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.76
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.49
148 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.73
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.52
151 TestFunctional/parallel/MountCmd/specific-port 2.35
152 TestFunctional/parallel/MountCmd/VerifyCleanup 2.04
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 107.99
160 TestMultiControlPlane/serial/DeployApp 5.61
161 TestMultiControlPlane/serial/PingHostFromPods 1.02
162 TestMultiControlPlane/serial/AddWorkerNode 36.94
163 TestMultiControlPlane/serial/NodeLabels 0.06
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.61
165 TestMultiControlPlane/serial/CopyFile 15.07
166 TestMultiControlPlane/serial/StopSecondaryNode 12.42
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.47
168 TestMultiControlPlane/serial/RestartSecondaryNode 19.73
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 7.9
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 194.18
171 TestMultiControlPlane/serial/DeleteSecondaryNode 12.1
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.44
173 TestMultiControlPlane/serial/StopCluster 35.53
174 TestMultiControlPlane/serial/RestartCluster 112.41
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.46
176 TestMultiControlPlane/serial/AddSecondaryNode 38.82
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.62
181 TestJSONOutput/start/Command 41.72
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.67
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.57
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.71
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.2
206 TestKicCustomNetwork/create_custom_network 38.1
207 TestKicCustomNetwork/use_default_bridge_network 24.88
208 TestKicExistingNetwork 22.33
209 TestKicCustomSubnet 23.41
210 TestKicStaticIP 22.72
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 49.82
215 TestMountStart/serial/StartWithMountFirst 5.47
216 TestMountStart/serial/VerifyMountFirst 0.23
217 TestMountStart/serial/StartWithMountSecond 8.45
218 TestMountStart/serial/VerifyMountSecond 0.23
219 TestMountStart/serial/DeleteFirst 1.58
220 TestMountStart/serial/VerifyMountPostDelete 0.24
221 TestMountStart/serial/Stop 1.17
222 TestMountStart/serial/RestartStopped 7.42
223 TestMountStart/serial/VerifyMountPostStop 0.23
226 TestMultiNode/serial/FreshStart2Nodes 68.11
227 TestMultiNode/serial/DeployApp2Nodes 5.02
228 TestMultiNode/serial/PingHostFrom2Pods 0.69
229 TestMultiNode/serial/AddNode 28.09
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.28
232 TestMultiNode/serial/CopyFile 8.71
233 TestMultiNode/serial/StopNode 2.04
234 TestMultiNode/serial/StartAfterStop 8.71
235 TestMultiNode/serial/RestartKeepsNodes 102.43
236 TestMultiNode/serial/DeleteNode 5.17
237 TestMultiNode/serial/StopMultiNode 23.64
238 TestMultiNode/serial/RestartMultiNode 47.77
239 TestMultiNode/serial/ValidateNameConflict 26.41
244 TestPreload 193.67
246 TestScheduledStopUnix 100.07
249 TestInsufficientStorage 9.71
250 TestRunningBinaryUpgrade 98.75
252 TestKubernetesUpgrade 353.69
253 TestMissingContainerUpgrade 109.68
255 TestStoppedBinaryUpgrade/Setup 2.58
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
265 TestPause/serial/Start 52.78
266 TestNoKubernetes/serial/StartWithK8s 29.02
267 TestStoppedBinaryUpgrade/Upgrade 126.29
268 TestNoKubernetes/serial/StartWithStopK8s 7.82
269 TestNoKubernetes/serial/Start 7.76
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
271 TestNoKubernetes/serial/ProfileList 1.15
272 TestNoKubernetes/serial/Stop 1.18
273 TestNoKubernetes/serial/StartNoArgs 7.22
274 TestPause/serial/SecondStartNoReconfiguration 31.85
275 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
283 TestNetworkPlugins/group/false 3.18
287 TestPause/serial/Pause 0.7
288 TestPause/serial/VerifyStatus 0.32
289 TestPause/serial/Unpause 0.66
290 TestPause/serial/PauseAgain 0.92
291 TestPause/serial/DeletePaused 3.95
292 TestPause/serial/VerifyDeletedResources 4.87
293 TestStoppedBinaryUpgrade/MinikubeLogs 0.86
295 TestStartStop/group/old-k8s-version/serial/FirstStart 126.2
297 TestStartStop/group/no-preload/serial/FirstStart 59.46
298 TestStartStop/group/no-preload/serial/DeployApp 10.24
299 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.77
300 TestStartStop/group/no-preload/serial/Stop 11.82
301 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
302 TestStartStop/group/no-preload/serial/SecondStart 262
303 TestStartStop/group/old-k8s-version/serial/DeployApp 10.43
304 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.83
305 TestStartStop/group/old-k8s-version/serial/Stop 12.37
307 TestStartStop/group/embed-certs/serial/FirstStart 44.12
308 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
309 TestStartStop/group/old-k8s-version/serial/SecondStart 143.28
310 TestStartStop/group/embed-certs/serial/DeployApp 10.28
311 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.13
312 TestStartStop/group/embed-certs/serial/Stop 12.08
313 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.16
314 TestStartStop/group/embed-certs/serial/SecondStart 276
316 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 43.5
317 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.24
318 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
319 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.79
320 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.82
321 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
322 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
323 TestStartStop/group/old-k8s-version/serial/Pause 2.47
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
325 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 276.12
327 TestStartStop/group/newest-cni/serial/FirstStart 26.02
328 TestStartStop/group/newest-cni/serial/DeployApp 0
329 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.83
330 TestStartStop/group/newest-cni/serial/Stop 1.86
331 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
332 TestStartStop/group/newest-cni/serial/SecondStart 12.93
333 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
334 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.21
337 TestStartStop/group/newest-cni/serial/Pause 2.89
338 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
339 TestNetworkPlugins/group/auto/Start 48.56
340 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
341 TestStartStop/group/no-preload/serial/Pause 3.25
342 TestNetworkPlugins/group/kindnet/Start 45.78
343 TestNetworkPlugins/group/auto/KubeletFlags 0.25
344 TestNetworkPlugins/group/auto/NetCatPod 8.18
345 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
346 TestNetworkPlugins/group/auto/DNS 0.12
347 TestNetworkPlugins/group/auto/Localhost 0.1
348 TestNetworkPlugins/group/auto/HairPin 0.11
349 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
350 TestNetworkPlugins/group/kindnet/NetCatPod 8.17
351 TestNetworkPlugins/group/kindnet/DNS 0.13
352 TestNetworkPlugins/group/kindnet/Localhost 0.12
353 TestNetworkPlugins/group/kindnet/HairPin 0.11
354 TestNetworkPlugins/group/calico/Start 59.19
355 TestNetworkPlugins/group/custom-flannel/Start 53.39
356 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
357 TestNetworkPlugins/group/calico/ControllerPod 6.01
358 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
359 TestNetworkPlugins/group/calico/KubeletFlags 0.25
360 TestNetworkPlugins/group/calico/NetCatPod 10.18
361 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
362 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
363 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.22
364 TestStartStop/group/embed-certs/serial/Pause 2.98
365 TestNetworkPlugins/group/enable-default-cni/Start 34.86
366 TestNetworkPlugins/group/calico/DNS 0.14
367 TestNetworkPlugins/group/calico/Localhost 0.13
368 TestNetworkPlugins/group/calico/HairPin 0.11
369 TestNetworkPlugins/group/custom-flannel/DNS 0.12
370 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
371 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
372 TestNetworkPlugins/group/flannel/Start 50.54
373 TestNetworkPlugins/group/bridge/Start 67.78
374 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
375 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.2
376 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
377 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
378 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
379 TestNetworkPlugins/group/flannel/ControllerPod 6.01
380 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
381 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
382 TestNetworkPlugins/group/flannel/NetCatPod 8.17
383 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
384 TestNetworkPlugins/group/flannel/DNS 0.12
385 TestNetworkPlugins/group/flannel/Localhost 0.1
386 TestNetworkPlugins/group/flannel/HairPin 0.1
387 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
388 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.56
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.41
390 TestNetworkPlugins/group/bridge/NetCatPod 9.25
391 TestNetworkPlugins/group/bridge/DNS 0.13
392 TestNetworkPlugins/group/bridge/Localhost 0.11
393 TestNetworkPlugins/group/bridge/HairPin 0.1
TestDownloadOnly/v1.20.0/json-events (16.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-501629 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-501629 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (16.217914407s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (16.22s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-501629
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-501629: exit status 85 (55.273744ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-501629 | jenkins | v1.33.1 | 19 Aug 24 11:56 UTC |          |
	|         | -p download-only-501629        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 11:56:48
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 11:56:48.992903   83926 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:56:48.993189   83926 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:56:48.993200   83926 out.go:358] Setting ErrFile to fd 2...
	I0819 11:56:48.993205   83926 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:56:48.993467   83926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-77145/.minikube/bin
	W0819 11:56:48.993630   83926 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19479-77145/.minikube/config/config.json: open /home/jenkins/minikube-integration/19479-77145/.minikube/config/config.json: no such file or directory
	I0819 11:56:48.994284   83926 out.go:352] Setting JSON to true
	I0819 11:56:48.995210   83926 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5904,"bootTime":1724062705,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 11:56:48.995271   83926 start.go:139] virtualization: kvm guest
	I0819 11:56:48.997426   83926 out.go:97] [download-only-501629] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0819 11:56:48.997540   83926 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19479-77145/.minikube/cache/preloaded-tarball: no such file or directory
	I0819 11:56:48.997577   83926 notify.go:220] Checking for updates...
	I0819 11:56:48.998805   83926 out.go:169] MINIKUBE_LOCATION=19479
	I0819 11:56:49.000171   83926 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:56:49.001440   83926 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19479-77145/kubeconfig
	I0819 11:56:49.002629   83926 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-77145/.minikube
	I0819 11:56:49.003698   83926 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0819 11:56:49.005818   83926 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 11:56:49.006112   83926 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:56:49.027714   83926 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 11:56:49.027834   83926 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 11:56:49.369875   83926 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-19 11:56:49.360772378 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 11:56:49.370015   83926 docker.go:307] overlay module found
	I0819 11:56:49.371597   83926 out.go:97] Using the docker driver based on user configuration
	I0819 11:56:49.371621   83926 start.go:297] selected driver: docker
	I0819 11:56:49.371634   83926 start.go:901] validating driver "docker" against <nil>
	I0819 11:56:49.371719   83926 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 11:56:49.418715   83926 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-19 11:56:49.41050837 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 11:56:49.418898   83926 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:56:49.419428   83926 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0819 11:56:49.419557   83926 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 11:56:49.421108   83926 out.go:169] Using Docker driver with root privileges
	I0819 11:56:49.422328   83926 cni.go:84] Creating CNI manager for ""
	I0819 11:56:49.422357   83926 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 11:56:49.422368   83926 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 11:56:49.422453   83926 start.go:340] cluster config:
	{Name:download-only-501629 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-501629 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:56:49.423766   83926 out.go:97] Starting "download-only-501629" primary control-plane node in "download-only-501629" cluster
	I0819 11:56:49.423809   83926 cache.go:121] Beginning downloading kic base image for docker with crio
	I0819 11:56:49.424871   83926 out.go:97] Pulling base image v0.0.44-1723740748-19452 ...
	I0819 11:56:49.424894   83926 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 11:56:49.425025   83926 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 11:56:49.440785   83926 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 11:56:49.440964   83926 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 11:56:49.441041   83926 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 11:56:49.532630   83926 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0819 11:56:49.532662   83926 cache.go:56] Caching tarball of preloaded images
	I0819 11:56:49.532846   83926 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 11:56:49.534588   83926 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0819 11:56:49.534605   83926 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0819 11:56:50.084869   83926 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19479-77145/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0819 11:57:03.386356   83926 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0819 11:57:03.386462   83926 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19479-77145/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0819 11:57:03.418603   83926 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	
	
	* The control-plane node download-only-501629 host does not exist
	  To start a cluster, run: "minikube start -p download-only-501629"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
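Editor's note: the log above (preload.go:236, download.go:107, preload.go:254) shows the preload tarball being fetched with an md5 digest embedded in the URL's ?checksum= query string and verified after download. Below is a minimal stdlib sketch of that verify-after-download step; fetchPreload is a hypothetical helper for illustration, not minikube's actual downloader, and the URL and digest are copied from the log.

```go
// Sketch: download a preload tarball and verify its md5 checksum,
// mirroring the download/verify sequence in the log above.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

func fetchPreload(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	// Hash the tarball while streaming it to disk.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"
	if err := fetchPreload(url, "preloaded.tar.lz4", "f93b07cde9c3289306cbaeb7a1803c19"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

The ?checksum=md5:... suffix in the logged URL is consumed client-side by minikube's downloader; the sketch simply inlines the equivalent check.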

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-501629
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/json-events (13.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-800839 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-800839 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (13.102383669s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (13.10s)
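Editor's note: the json-events subtest drives the same start command shown above but with -o=json and consumes the event stream from stdout. A minimal sketch of that consumer follows, assuming one JSON object per output line (which is how the test reads it); the event field names are not modeled here.

```go
// Sketch: run `minikube start -o=json --download-only ...` (flags taken
// from the logged command) and decode stdout as newline-delimited JSON.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-o=json",
		"--download-only", "-p", "download-only-800839", "--force",
		"--kubernetes-version=v1.31.0", "--container-runtime=crio", "--driver=docker")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	sc := bufio.NewScanner(stdout)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var ev map[string]interface{}
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			log.Fatalf("non-JSON line in -o=json output: %v", err)
		}
		fmt.Println(ev["type"], ev["data"]) // field names are assumptions
	}
	if err := cmd.Wait(); err != nil {
		log.Fatal(err)
	}
}
```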

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)


                                                
                                    
x
+
TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-800839
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-800839: exit status 85 (59.948856ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-501629 | jenkins | v1.33.1 | 19 Aug 24 11:56 UTC |                     |
	|         | -p download-only-501629        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:57 UTC | 19 Aug 24 11:57 UTC |
	| delete  | -p download-only-501629        | download-only-501629 | jenkins | v1.33.1 | 19 Aug 24 11:57 UTC | 19 Aug 24 11:57 UTC |
	| start   | -o=json --download-only        | download-only-800839 | jenkins | v1.33.1 | 19 Aug 24 11:57 UTC |                     |
	|         | -p download-only-800839        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 11:57:05
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 11:57:05.581094   84300 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:57:05.581585   84300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:57:05.581603   84300 out.go:358] Setting ErrFile to fd 2...
	I0819 11:57:05.581611   84300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:57:05.582053   84300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-77145/.minikube/bin
	I0819 11:57:05.583036   84300 out.go:352] Setting JSON to true
	I0819 11:57:05.583894   84300 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5921,"bootTime":1724062705,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 11:57:05.583959   84300 start.go:139] virtualization: kvm guest
	I0819 11:57:05.586080   84300 out.go:97] [download-only-800839] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 11:57:05.586252   84300 notify.go:220] Checking for updates...
	I0819 11:57:05.587626   84300 out.go:169] MINIKUBE_LOCATION=19479
	I0819 11:57:05.588932   84300 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:57:05.590219   84300 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19479-77145/kubeconfig
	I0819 11:57:05.591486   84300 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-77145/.minikube
	I0819 11:57:05.592715   84300 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0819 11:57:05.595058   84300 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 11:57:05.595303   84300 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:57:05.615960   84300 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 11:57:05.616058   84300 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 11:57:05.660213   84300 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-19 11:57:05.651323696 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 11:57:05.660323   84300 docker.go:307] overlay module found
	I0819 11:57:05.661944   84300 out.go:97] Using the docker driver based on user configuration
	I0819 11:57:05.661970   84300 start.go:297] selected driver: docker
	I0819 11:57:05.661985   84300 start.go:901] validating driver "docker" against <nil>
	I0819 11:57:05.662079   84300 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 11:57:05.709747   84300 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-19 11:57:05.700137279 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 11:57:05.709994   84300 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:57:05.710644   84300 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0819 11:57:05.710857   84300 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 11:57:05.712674   84300 out.go:169] Using Docker driver with root privileges
	I0819 11:57:05.713797   84300 cni.go:84] Creating CNI manager for ""
	I0819 11:57:05.713816   84300 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 11:57:05.713825   84300 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 11:57:05.713920   84300 start.go:340] cluster config:
	{Name:download-only-800839 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-800839 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:57:05.715155   84300 out.go:97] Starting "download-only-800839" primary control-plane node in "download-only-800839" cluster
	I0819 11:57:05.715174   84300 cache.go:121] Beginning downloading kic base image for docker with crio
	I0819 11:57:05.716224   84300 out.go:97] Pulling base image v0.0.44-1723740748-19452 ...
	I0819 11:57:05.716246   84300 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 11:57:05.716291   84300 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 11:57:05.731787   84300 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 11:57:05.731911   84300 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 11:57:05.731927   84300 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0819 11:57:05.731932   84300 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0819 11:57:05.731939   84300 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0819 11:57:05.824358   84300 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 11:57:05.824388   84300 cache.go:56] Caching tarball of preloaded images
	I0819 11:57:05.824548   84300 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 11:57:05.826249   84300 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0819 11:57:05.826269   84300 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	I0819 11:57:05.938870   84300 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:4a2ae163f7665ceaa95dee8ffc8efdba -> /home/jenkins/minikube-integration/19479-77145/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-800839 host does not exist
	  To start a cluster, run: "minikube start -p download-only-800839"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-800839
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnlyKic (1.06s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-335603 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-335603" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-335603
--- PASS: TestDownloadOnlyKic (1.06s)
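Editor's note: the kic path hinges on the "Checking for gcr.io/k8s-minikube/kicbase-builds:... in local docker daemon" step logged earlier (image.go:79). A shell-equivalent sketch of that presence check follows: `docker image inspect` exits non-zero when the image is absent. minikube does this natively in Go rather than by shelling out; the exec-based version is purely illustrative, and the digest suffix is omitted for brevity.

```go
// Sketch: is an image already present in the local docker daemon?
// Only the exit status of `docker image inspect` matters here.
package main

import (
	"fmt"
	"os/exec"
)

func imageInDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452"
	fmt.Println(ref, "in local daemon:", imageInDaemon(ref))
}
```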

                                                
                                    
x
+
TestBinaryMirror (0.72s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-139959 --alsologtostderr --binary-mirror http://127.0.0.1:45177 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-139959" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-139959
--- PASS: TestBinaryMirror (0.72s)
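Editor's note: --binary-mirror (pointed at http://127.0.0.1:45177 above) redirects kubectl/kubelet/kubeadm downloads to a local HTTP server. A minimal sketch of such a mirror follows; the ./mirror directory, the port, and the assumption that the directory layout mirrors the upstream release paths are all illustrative, not taken from the test.

```go
// Sketch: a local binary mirror is just a static file server whose
// directory tree imitates the upstream release layout, e.g.
// ./mirror/v1.31.0/bin/linux/amd64/kubectl (layout is an assumption).
package main

import (
	"log"
	"net/http"
)

func main() {
	log.Fatal(http.ListenAndServe("127.0.0.1:45177", http.FileServer(http.Dir("./mirror"))))
}
```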

                                                
                                    
x
+
TestOffline (52.96s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-538909 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-538909 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (50.679102544s)
helpers_test.go:175: Cleaning up "offline-crio-538909" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-538909
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-538909: (2.282078473s)
--- PASS: TestOffline (52.96s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-010148
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-010148: exit status 85 (47.795886ms)

                                                
                                                
-- stdout --
	* Profile "addons-010148" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-010148"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
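Editor's note: this subtest (and the Disabling twin below) passes precisely because the command fails with exit status 85 when the profile does not exist. A sketch of that assertion pattern follows, using only os/exec; the test suite's own helpers differ, this just shows the exit-code check.

```go
// Sketch: run a command that is expected to fail, and assert on the
// specific exit code (85 is what the log above treats as expected for
// an addons command against a missing profile).
package main

import (
	"errors"
	"log"
	"os/exec"
)

func main() {
	err := exec.Command("out/minikube-linux-amd64",
		"addons", "enable", "dashboard", "-p", "addons-010148").Run()
	var ee *exec.ExitError
	if !errors.As(err, &ee) || ee.ExitCode() != 85 {
		log.Fatalf("expected exit status 85, got %v", err)
	}
	log.Println("got expected exit status 85")
}
```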

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-010148
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-010148: exit status 85 (47.595794ms)

                                                
                                                
-- stdout --
	* Profile "addons-010148" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-010148"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (182.66s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-010148 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-010148 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m2.657898965s)
--- PASS: TestAddons/Setup (182.66s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-010148 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-010148 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
x
+
TestAddons/parallel/Registry (16.4s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.440082ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-vzmzk" [f04fc68c-2fa9-46e6-a18d-49a1a8a81968] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.0036851s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-zddbz" [59ab7eba-4de5-4dd0-b7df-ee19cd688277] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003308423s
addons_test.go:342: (dbg) Run:  kubectl --context addons-010148 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-010148 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-010148 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.659080089s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-010148 ip
2024/08/19 12:00:58 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-010148 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.40s)
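Editor's note: the Registry test probes reachability twice: once in-cluster with busybox's `wget --spider` against the service DNS name, and once from the host against the node IP (the "GET http://192.168.49.2:5000" DEBUG line). Only the second is reproducible outside the cluster; a sketch of it follows, with the URL taken from the log.

```go
// Sketch: probe the registry endpoint from the host, mirroring the
// logged GET against the minikube node IP.
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://192.168.49.2:5000")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("registry answered with", resp.Status) // a live registry returns 200
}
```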

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.65s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-89vvk" [0cd6d8f4-9d02-4f4b-857c-3d5fe629cc73] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004703461s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-010148
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-010148: (5.648387306s)
--- PASS: TestAddons/parallel/InspektorGadget (11.65s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (10.52s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.546122ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-99f2d" [a79cfc7e-dad8-4740-8386-760769073d6b] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003955285s
addons_test.go:475: (dbg) Run:  kubectl --context addons-010148 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-010148 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.041107877s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-010148 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.52s)

                                                
                                    
x
+
TestAddons/parallel/CSI (53.9s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 18.758188ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-010148 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010148 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010148 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010148 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010148 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010148 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010148 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010148 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010148 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010148 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010148 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010148 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010148 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010148 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-010148 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ecb372e4-635b-4a01-b939-fbacb76051d9] Pending
helpers_test.go:344: "task-pv-pod" [ecb372e4-635b-4a01-b939-fbacb76051d9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ecb372e4-635b-4a01-b939-fbacb76051d9] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.004114413s
addons_test.go:590: (dbg) Run:  kubectl --context addons-010148 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-010148 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-010148 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-010148 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-010148 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-010148 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010148 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010148 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010148 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010148 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010148 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010148 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010148 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010148 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-010148 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [7c31b258-dac9-4e9b-a526-d33493dce6bd] Pending
helpers_test.go:344: "task-pv-pod-restore" [7c31b258-dac9-4e9b-a526-d33493dce6bd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [7c31b258-dac9-4e9b-a526-d33493dce6bd] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.003560258s
addons_test.go:632: (dbg) Run:  kubectl --context addons-010148 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-010148 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-010148 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-010148 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-010148 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.727061265s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-010148 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (53.90s)
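Editor's note: the long runs of repeated helpers_test.go:394 lines above are a polling loop: kubectl is asked for the PVC's .status.phase until it reports Bound. A sketch of that loop follows; the 2s interval and the waitPVCBound helper name are illustrative, not the suite's actual values.

```go
// Sketch: poll a PVC's phase via kubectl until it is Bound or a
// deadline passes, mirroring the repeated jsonpath queries above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func waitPVCBound(ctx, pvc, ns string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx, "get", "pvc", pvc,
			"-o", "jsonpath={.status.phase}", "-n", ns).Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %v", ns, pvc, timeout)
}

func main() {
	if err := waitPVCBound("addons-010148", "hpvc", "default", 6*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("hpvc is Bound")
}
```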

                                                
                                    
x
+
TestAddons/parallel/Headlamp (18.37s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-010148 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-c9t92" [3cf74eb9-2e83-4a54-b40d-c40ae5334727] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-c9t92" [3cf74eb9-2e83-4a54-b40d-c40ae5334727] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003162424s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-010148 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-010148 addons disable headlamp --alsologtostderr -v=1: (5.637126175s)
--- PASS: TestAddons/parallel/Headlamp (18.37s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.52s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-d6zcp" [d59dec4c-e2a7-482c-95ef-4f9e29268471] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003534597s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-010148
--- PASS: TestAddons/parallel/CloudSpanner (5.52s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (60.9s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-010148 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-010148 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010148 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010148 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010148 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010148 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010148 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010148 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010148 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-010148 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7a20f299-6705-4097-9228-0f38ff24f2fd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7a20f299-6705-4097-9228-0f38ff24f2fd] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7a20f299-6705-4097-9228-0f38ff24f2fd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 10.003772874s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-010148 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-010148 ssh "cat /opt/local-path-provisioner/pvc-520035d6-e6c6-424a-94a4-de8464c48f46_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-010148 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-010148 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-010148 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-010148 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.095789098s)
--- PASS: TestAddons/parallel/LocalPath (60.90s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.47s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-9gfqj" [780617de-6822-48b4-bc3f-20932c2c5681] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.002989997s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-010148
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.47s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (10.85s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-f6ql9" [e83af1a9-a51e-4595-8eee-6a048e69d973] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004033066s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-010148 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-010148 addons disable yakd --alsologtostderr -v=1: (5.840123255s)
--- PASS: TestAddons/parallel/Yakd (10.85s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.03s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-010148
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-010148: (11.794912487s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-010148
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-010148
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-010148
--- PASS: TestAddons/StoppedEnableDisable (12.03s)

TestCertOptions (31s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-886584 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-886584 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (26.558418857s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-886584 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-886584 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-886584 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-886584" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-886584
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-886584: (3.809718581s)
--- PASS: TestCertOptions (31.00s)
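
Note: the flow above is reproducible by hand. A minimal sketch using the same flags the test logs (profile name kept from the log; any free name works):

	# start a cluster with extra apiserver SANs and a non-default apiserver port
	out/minikube-linux-amd64 start -p cert-options-886584 --memory=2048 \
	    --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com \
	    --apiserver-port=8555 --driver=docker --container-runtime=crio
	# verify the SANs and port were baked into the serving certificate
	out/minikube-linux-amd64 -p cert-options-886584 ssh \
	    "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"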

TestCertExpiration (220.66s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-966550 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-966550 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (24.713527983s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-966550 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-966550 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (13.722822634s)
helpers_test.go:175: Cleaning up "cert-expiration-966550" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-966550
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-966550: (2.22177124s)
--- PASS: TestCertExpiration (220.66s)
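
Note: the ~220s wall time is mostly the test waiting out the 3m certificate window between the two starts; the second start on the same profile is what exercises certificate regeneration with a longer expiry. Condensed:

	out/minikube-linux-amd64 start -p cert-expiration-966550 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=crio
	# wait for the short-lived certs to near expiry, then restart with a one-year window
	out/minikube-linux-amd64 start -p cert-expiration-966550 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=crio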

TestForceSystemdFlag (26.29s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-340301 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-340301 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.671871468s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-340301 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-340301" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-340301
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-340301: (2.366549951s)
--- PASS: TestForceSystemdFlag (26.29s)
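
Note: the `cat` above is the actual assertion: with --force-systemd, minikube should write systemd as the cgroup manager into CRI-O's drop-in config. A manual spot check (the grep key is an assumption about the drop-in's contents, not something this log shows):

	out/minikube-linux-amd64 -p force-systemd-flag-340301 ssh \
	    "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
	# expected (assumed): cgroup_manager = "systemd"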

TestForceSystemdEnv (29.94s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-033817 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-033817 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.836054506s)
helpers_test.go:175: Cleaning up "force-systemd-env-033817" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-033817
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-033817: (4.102364544s)
--- PASS: TestForceSystemdEnv (29.94s)
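
Note: same check as TestForceSystemdFlag, but driven by the environment rather than a flag; MINIKUBE_FORCE_SYSTEMD also appears in the env dumps later in this report. Presumably equivalent to:

	MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-amd64 start -p force-systemd-env-033817 \
	    --memory=2048 --driver=docker --container-runtime=crio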

TestKVMDriverInstallOrUpdate (4.6s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.60s)

TestErrorSpam/setup (23.01s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-279368 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-279368 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-279368 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-279368 --driver=docker  --container-runtime=crio: (23.006784206s)
--- PASS: TestErrorSpam/setup (23.01s)

TestErrorSpam/start (0.56s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-279368 --log_dir /tmp/nospam-279368 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-279368 --log_dir /tmp/nospam-279368 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-279368 --log_dir /tmp/nospam-279368 start --dry-run
--- PASS: TestErrorSpam/start (0.56s)

TestErrorSpam/status (0.82s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-279368 --log_dir /tmp/nospam-279368 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-279368 --log_dir /tmp/nospam-279368 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-279368 --log_dir /tmp/nospam-279368 status
--- PASS: TestErrorSpam/status (0.82s)

TestErrorSpam/pause (1.45s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-279368 --log_dir /tmp/nospam-279368 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-279368 --log_dir /tmp/nospam-279368 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-279368 --log_dir /tmp/nospam-279368 pause
--- PASS: TestErrorSpam/pause (1.45s)

TestErrorSpam/unpause (1.64s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-279368 --log_dir /tmp/nospam-279368 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-279368 --log_dir /tmp/nospam-279368 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-279368 --log_dir /tmp/nospam-279368 unpause
--- PASS: TestErrorSpam/unpause (1.64s)

TestErrorSpam/stop (1.34s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-279368 --log_dir /tmp/nospam-279368 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-279368 --log_dir /tmp/nospam-279368 stop: (1.166212296s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-279368 --log_dir /tmp/nospam-279368 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-279368 --log_dir /tmp/nospam-279368 stop
--- PASS: TestErrorSpam/stop (1.34s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19479-77145/.minikube/files/etc/test/nested/copy/83914/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (41.81s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-791037 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-791037 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (41.813370044s)
--- PASS: TestFunctional/serial/StartWithProxy (41.81s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (33.85s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-791037 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-791037 --alsologtostderr -v=8: (33.851054485s)
functional_test.go:663: soft start took 33.851798637s for "functional-791037" cluster.
--- PASS: TestFunctional/serial/SoftStart (33.85s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-791037 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.88s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-791037 cache add registry.k8s.io/pause:3.3: (1.038223294s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.88s)

TestFunctional/serial/CacheCmd/cache/add_local (2.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-791037 /tmp/TestFunctionalserialCacheCmdcacheadd_local3012328573/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 cache add minikube-local-cache-test:functional-791037
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-791037 cache add minikube-local-cache-test:functional-791037: (1.772758187s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 cache delete minikube-local-cache-test:functional-791037
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-791037
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.10s)
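
Note: the local-image path boils down to: build an image in the host Docker, add it to minikube's cache, then clean up on both sides. A sketch (the build-context path here is illustrative; the test uses a generated temp dir):

	docker build -t minikube-local-cache-test:functional-791037 ./some-build-context
	out/minikube-linux-amd64 -p functional-791037 cache add minikube-local-cache-test:functional-791037
	out/minikube-linux-amd64 -p functional-791037 cache delete minikube-local-cache-test:functional-791037
	docker rmi minikube-local-cache-test:functional-791037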

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-791037 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (257.444906ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)
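
Note: the non-zero exit in the middle is the expected half of this check: the image is removed inside the node, `crictl inspecti` fails, and `cache reload` restores it from the host-side cache. By hand:

	out/minikube-linux-amd64 -p functional-791037 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-791037 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: no such image
	out/minikube-linux-amd64 -p functional-791037 cache reload
	out/minikube-linux-amd64 -p functional-791037 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # now succeeds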

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 kubectl -- --context functional-791037 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-791037 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (35.98s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-791037 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-791037 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.978426194s)
functional_test.go:761: restart took 35.978546338s for "functional-791037" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.98s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-791037 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.31s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-791037 logs: (1.307405187s)
--- PASS: TestFunctional/serial/LogsCmd (1.31s)

TestFunctional/serial/LogsFileCmd (1.31s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 logs --file /tmp/TestFunctionalserialLogsFileCmd3976066329/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-791037 logs --file /tmp/TestFunctionalserialLogsFileCmd3976066329/001/logs.txt: (1.304554067s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.31s)

TestFunctional/serial/InvalidService (4.33s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-791037 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-791037
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-791037: exit status 115 (309.818101ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30307 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-791037 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.33s)
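
Note: exit status 115 is the pass condition here: the Service exists (so the URL table still prints) but no running pod backs it, and `minikube service` surfaces that as SVC_UNREACHABLE instead of handing back a dead URL. Reproducible with the test's own manifest:

	kubectl --context functional-791037 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-amd64 service invalid-svc -p functional-791037    # exit 115, SVC_UNREACHABLE
	kubectl --context functional-791037 delete -f testdata/invalidsvc.yaml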

TestFunctional/parallel/ConfigCmd (0.36s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-791037 config get cpus: exit status 14 (72.5849ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-791037 config get cpus: exit status 14 (56.315363ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)

TestFunctional/parallel/DashboardCmd (20.11s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-791037 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-791037 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 125716: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (20.11s)

TestFunctional/parallel/DryRun (0.34s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-791037 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-791037 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (133.89108ms)

-- stdout --
	* [functional-791037] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19479
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19479-77145/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-77145/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0819 12:09:14.330108  122536 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:09:14.330228  122536 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:09:14.330237  122536 out.go:358] Setting ErrFile to fd 2...
	I0819 12:09:14.330242  122536 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:09:14.330424  122536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-77145/.minikube/bin
	I0819 12:09:14.330966  122536 out.go:352] Setting JSON to false
	I0819 12:09:14.332061  122536 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6649,"bootTime":1724062705,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 12:09:14.332126  122536 start.go:139] virtualization: kvm guest
	I0819 12:09:14.334218  122536 out.go:177] * [functional-791037] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 12:09:14.335540  122536 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 12:09:14.335591  122536 notify.go:220] Checking for updates...
	I0819 12:09:14.337825  122536 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 12:09:14.339190  122536 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-77145/kubeconfig
	I0819 12:09:14.340478  122536 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-77145/.minikube
	I0819 12:09:14.341788  122536 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 12:09:14.343160  122536 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 12:09:14.344708  122536 config.go:182] Loaded profile config "functional-791037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:09:14.345211  122536 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 12:09:14.367323  122536 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 12:09:14.367470  122536 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 12:09:14.413919  122536 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:68 SystemTime:2024-08-19 12:09:14.404647843 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 12:09:14.414035  122536 docker.go:307] overlay module found
	I0819 12:09:14.415857  122536 out.go:177] * Using the docker driver based on existing profile
	I0819 12:09:14.417281  122536 start.go:297] selected driver: docker
	I0819 12:09:14.417302  122536 start.go:901] validating driver "docker" against &{Name:functional-791037 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-791037 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:09:14.417395  122536 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 12:09:14.419370  122536 out.go:201] 
	W0819 12:09:14.420456  122536 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0819 12:09:14.421728  122536 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-791037 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.34s)
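
Note: the memory validation rejects 250MB before any driver work happens, which is why --dry-run against the existing profile returns in ~130ms above. A manual sketch of the failing half:

	out/minikube-linux-amd64 start -p functional-791037 --dry-run --memory 250MB \
	    --driver=docker --container-runtime=crio; echo "exit=$?"    # expect exit=23 (RSRC_INSUFFICIENT_REQ_MEMORY)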

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-791037 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-791037 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (151.994007ms)

-- stdout --
	* [functional-791037] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19479
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19479-77145/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-77145/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0819 12:09:14.674564  122756 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:09:14.674695  122756 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:09:14.674706  122756 out.go:358] Setting ErrFile to fd 2...
	I0819 12:09:14.674712  122756 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:09:14.675033  122756 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-77145/.minikube/bin
	I0819 12:09:14.675596  122756 out.go:352] Setting JSON to false
	I0819 12:09:14.676605  122756 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6650,"bootTime":1724062705,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 12:09:14.676668  122756 start.go:139] virtualization: kvm guest
	I0819 12:09:14.679072  122756 out.go:177] * [functional-791037] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0819 12:09:14.680792  122756 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 12:09:14.680853  122756 notify.go:220] Checking for updates...
	I0819 12:09:14.683687  122756 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 12:09:14.685047  122756 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-77145/kubeconfig
	I0819 12:09:14.686362  122756 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-77145/.minikube
	I0819 12:09:14.687701  122756 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 12:09:14.689307  122756 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 12:09:14.691157  122756 config.go:182] Loaded profile config "functional-791037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:09:14.691664  122756 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 12:09:14.715868  122756 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 12:09:14.716005  122756 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 12:09:14.768784  122756 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:54 SystemTime:2024-08-19 12:09:14.759410475 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 12:09:14.768897  122756 docker.go:307] overlay module found
	I0819 12:09:14.771614  122756 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0819 12:09:14.772731  122756 start.go:297] selected driver: docker
	I0819 12:09:14.772753  122756 start.go:901] validating driver "docker" against &{Name:functional-791037 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-791037 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:09:14.772870  122756 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 12:09:14.774843  122756 out.go:201] 
	W0819 12:09:14.775965  122756 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0819 12:09:14.777091  122756 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

TestFunctional/parallel/StatusCmd (0.88s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.88s)

TestFunctional/parallel/ServiceCmdConnect (12.5s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-791037 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-791037 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-j2vkh" [96b1d84c-e9cd-40d8-b9a0-95c10e14a250] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-j2vkh" [96b1d84c-e9cd-40d8-b9a0-95c10e14a250] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.003174159s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32498
functional_test.go:1675: http://192.168.49.2:32498: success! body:

Hostname: hello-node-connect-67bdd5bbb4-j2vkh

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32498
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.50s)
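
Note: the echoserver body above was fetched from the NodePort URL the test resolves. The equivalent by hand (curl stands in for the test's Go HTTP client; the port is allocated per run):

	kubectl --context functional-791037 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-791037 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-linux-amd64 -p functional-791037 service hello-node-connect --url)
	curl -s "$URL"    # prints the Hostname / request-dump block seen above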

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (38.81s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3435aade-5548-4c61-b08a-2fd273670406] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004113113s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-791037 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-791037 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-791037 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-791037 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [484b134a-e780-41b8-8275-fb0dec06e23d] Pending
helpers_test.go:344: "sp-pod" [484b134a-e780-41b8-8275-fb0dec06e23d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [484b134a-e780-41b8-8275-fb0dec06e23d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004489075s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-791037 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-791037 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-791037 delete -f testdata/storage-provisioner/pod.yaml: (1.04733847s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-791037 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [51d997ca-4c54-4987-9bd4-39f96fea81a4] Pending
helpers_test.go:344: "sp-pod" [51d997ca-4c54-4987-9bd4-39f96fea81a4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [51d997ca-4c54-4987-9bd4-39f96fea81a4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.010878842s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-791037 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (38.81s)
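
Note: the two sp-pod incarnations are the point: data written through the claim must survive pod deletion and reattach. Condensed from the log:

	kubectl --context functional-791037 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-791037 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-791037 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-791037 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-791037 apply -f testdata/storage-provisioner/pod.yaml    # fresh pod, same claim
	kubectl --context functional-791037 exec sp-pod -- ls /tmp/mount                      # foo persists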

TestFunctional/parallel/SSHCmd (0.63s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.63s)

TestFunctional/parallel/CpCmd (1.81s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 ssh -n functional-791037 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 cp functional-791037:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1975838860/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 ssh -n functional-791037 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 ssh -n functional-791037 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.81s)
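
Note: the three cp invocations above exercise host-to-guest, guest-to-host, and copying into a guest directory that does not yet exist; a sketch of the same round trip (the /tmp/cp-test.txt host destination is illustrative):

    # host -> guest
    out/minikube-linux-amd64 -p functional-791037 cp testdata/cp-test.txt /home/docker/cp-test.txt
    # guest -> host; the source is prefixed with the node name
    out/minikube-linux-amd64 -p functional-791037 cp functional-791037:/home/docker/cp-test.txt /tmp/cp-test.txt
    # verify from inside the guest
    out/minikube-linux-amd64 -p functional-791037 ssh -n functional-791037 "sudo cat /home/docker/cp-test.txt"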

TestFunctional/parallel/MySQL (24.24s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-791037 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-fz48z" [91febfbc-ba04-4799-a2ed-433e2deb820a] Pending
helpers_test.go:344: "mysql-6cdb49bbb-fz48z" [91febfbc-ba04-4799-a2ed-433e2deb820a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-fz48z" [91febfbc-ba04-4799-a2ed-433e2deb820a] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.004369352s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-791037 exec mysql-6cdb49bbb-fz48z -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-791037 exec mysql-6cdb49bbb-fz48z -- mysql -ppassword -e "show databases;": exit status 1 (183.011987ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-791037 exec mysql-6cdb49bbb-fz48z -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-791037 exec mysql-6cdb49bbb-fz48z -- mysql -ppassword -e "show databases;": exit status 1 (117.961027ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
2024/08/19 12:09:40 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1807: (dbg) Run:  kubectl --context functional-791037 exec mysql-6cdb49bbb-fz48z -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.24s)
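
Note: the two ERROR 2002 failures above are expected noise rather than a bug: the pod reports Running as soon as the container starts, but mysqld needs a few more seconds before its socket accepts connections, so the test simply retries the query. The equivalent by hand (pod name taken from this run):

    # poll until mysqld is actually accepting connections
    until kubectl --context functional-791037 exec mysql-6cdb49bbb-fz48z -- \
        mysql -ppassword -e "show databases;"; do
      sleep 2
    done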

TestFunctional/parallel/FileSync (0.24s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/83914/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 ssh "sudo cat /etc/test/nested/copy/83914/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

TestFunctional/parallel/CertSync (1.59s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/83914.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 ssh "sudo cat /etc/ssl/certs/83914.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/83914.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 ssh "sudo cat /usr/share/ca-certificates/83914.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/839142.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 ssh "sudo cat /etc/ssl/certs/839142.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/839142.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 ssh "sudo cat /usr/share/ca-certificates/839142.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.59s)
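
Note: the 51391683.0 and 3ec20f2e.0 names are OpenSSL subject-hash link names, which is how libssl looks up a CA in /etc/ssl/certs. Assuming openssl is available in the guest image, the hash can be recomputed to confirm the pairing:

    # should print 51391683 if the hashed link belongs to the synced 83914.pem
    out/minikube-linux-amd64 -p functional-791037 ssh \
      "openssl x509 -noout -hash -in /etc/ssl/certs/83914.pem"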

TestFunctional/parallel/NodeLabels (0.07s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-791037 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-791037 ssh "sudo systemctl is-active docker": exit status 1 (299.102801ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-791037 ssh "sudo systemctl is-active containerd": exit status 1 (249.116021ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)
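
Note: the ssh exit status 3 in the stderr blocks is systemctl's non-zero code for an inactive unit, which minikube folds into its own exit status 1; the assertion is therefore "inactive" on stdout plus a non-zero exit. A sketch of the same check (the crio unit name for the active runtime is stated here as an assumption, not taken from this log):

    # expect "inactive" and a non-zero exit for the runtimes that are switched off
    out/minikube-linux-amd64 -p functional-791037 ssh "sudo systemctl is-active docker"
    out/minikube-linux-amd64 -p functional-791037 ssh "sudo systemctl is-active containerd"
    # the selected runtime should report "active" and exit 0
    out/minikube-linux-amd64 -p functional-791037 ssh "sudo systemctl is-active crio"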

TestFunctional/parallel/License (0.59s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.59s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.2s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-791037 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-791037 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-bww9h" [98d5cdd7-0749-4c15-b85b-efe0ff015847] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-bww9h" [98d5cdd7-0749-4c15-b85b-efe0ff015847] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.00342195s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.20s)
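
Note: the deployment/expose pair above is the setup every later ServiceCmd subtest builds on; by hand, with an explicit readiness wait swapped in for the harness's label polling:

    kubectl --context functional-791037 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-791037 expose deployment hello-node --type=NodePort --port=8080
    # block until the pod is Ready instead of watching pod phases
    kubectl --context functional-791037 wait --for=condition=ready pod -l app=hello-node --timeout=600s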

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-791037 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-791037 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-791037 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-791037 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 119633: os: process already finished
helpers_test.go:502: unable to terminate pid 119305: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-791037 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.25s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-791037 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [9cbe99c2-a5d9-40a3-b291-20c9027a2eb7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [9cbe99c2-a5d9-40a3-b291-20c9027a2eb7] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 13.003531334s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.25s)
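
Note: with the tunnel from StartTunnel still running in the background, the service created by testdata/testsvc.yaml eventually gets a load-balancer ingress IP assigned, which is what the IngressIP subtest later reads back:

    kubectl --context functional-791037 apply -f testdata/testsvc.yaml
    # empty until the tunnel has claimed the route; then prints the assigned IP
    kubectl --context functional-791037 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'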

TestFunctional/parallel/ServiceCmd/List (0.47s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.47s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 service list -o json
functional_test.go:1494: Took "579.704545ms" to run "out/minikube-linux-amd64 -p functional-791037 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30157
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

TestFunctional/parallel/ServiceCmd/Format (0.5s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.50s)

TestFunctional/parallel/ServiceCmd/URL (0.45s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30157
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.45s)
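
Note: HTTPS, Format, and URL are three views of the same NodePort endpoint; 30157 is the port Kubernetes happened to assign in this run:

    out/minikube-linux-amd64 -p functional-791037 service hello-node --url                            # http://192.168.49.2:30157
    out/minikube-linux-amd64 -p functional-791037 service --namespace=default --https --url hello-node  # https://192.168.49.2:30157
    out/minikube-linux-amd64 -p functional-791037 service hello-node --url --format={{.IP}}           # 192.168.49.2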

TestFunctional/parallel/Version/short (0.05s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.89s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.89s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-791037 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-791037
localhost/kicbase/echo-server:functional-791037
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kindest/kindnetd:v20240730-75a5af0c
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-791037 image ls --format short --alsologtostderr:
I0819 12:09:27.507724  127012 out.go:345] Setting OutFile to fd 1 ...
I0819 12:09:27.507838  127012 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 12:09:27.507847  127012 out.go:358] Setting ErrFile to fd 2...
I0819 12:09:27.507851  127012 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 12:09:27.508063  127012 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-77145/.minikube/bin
I0819 12:09:27.508598  127012 config.go:182] Loaded profile config "functional-791037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 12:09:27.508695  127012 config.go:182] Loaded profile config "functional-791037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 12:09:27.509063  127012 cli_runner.go:164] Run: docker container inspect functional-791037 --format={{.State.Status}}
I0819 12:09:27.526353  127012 ssh_runner.go:195] Run: systemctl --version
I0819 12:09:27.526398  127012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-791037
I0819 12:09:27.544990  127012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/functional-791037/id_rsa Username:docker}
I0819 12:09:27.630152  127012 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
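
Note: the stderr trace shows what `image ls` amounts to on a crio cluster: open an ssh session into the node and read the image store through crictl. The raw form of the same data:

    out/minikube-linux-amd64 -p functional-791037 ssh "sudo crictl images --output json"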

TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-791037 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | alpine             | 0f0eda053dc5c | 44.7MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/kicbase/echo-server           | functional-791037  | 9056ab77afb8e | 4.94MB |
| localhost/minikube-local-cache-test     | functional-791037  | b4df9ad58a4c4 | 3.33kB |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-controller-manager | v1.31.0            | 045733566833c | 89.4MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| localhost/my-image                      | functional-791037  | cd6e84c36e401 | 1.47MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-apiserver          | v1.31.0            | 604f5db92eaa8 | 95.2MB |
| registry.k8s.io/kube-scheduler          | v1.31.0            | 1766f54c897f0 | 68.4MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | latest             | 5ef79149e0ec8 | 192MB  |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-proxy              | v1.31.0            | ad83b2ca7b09e | 92.7MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | 917d7814b9b5b | 87.2MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-791037 image ls --format table --alsologtostderr:
I0819 12:09:32.636292  127911 out.go:345] Setting OutFile to fd 1 ...
I0819 12:09:32.636398  127911 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 12:09:32.636408  127911 out.go:358] Setting ErrFile to fd 2...
I0819 12:09:32.636413  127911 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 12:09:32.636627  127911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-77145/.minikube/bin
I0819 12:09:32.637232  127911 config.go:182] Loaded profile config "functional-791037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 12:09:32.637343  127911 config.go:182] Loaded profile config "functional-791037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 12:09:32.637718  127911 cli_runner.go:164] Run: docker container inspect functional-791037 --format={{.State.Status}}
I0819 12:09:32.654642  127911 ssh_runner.go:195] Run: systemctl --version
I0819 12:09:32.654699  127911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-791037
I0819 12:09:32.671734  127911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/functional-791037/id_rsa Username:docker}
I0819 12:09:32.754210  127911 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-791037 image ls --format json --alsologtostderr:
[{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"],"
repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"95233506"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d","registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"89437512"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":["registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a","registry.k8s.io/kub
e-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"68420936"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"87165492"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f
5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a","repoDigests":["docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0","docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158"],"repoTags":["docker.io/library/nginx:alpine"],"size":"44668625"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],
"repoTags":["localhost/kicbase/echo-server:functional-791037"],"size":"4943877"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c","repoDigests":["docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add","docker.io/library/nginx@sha256:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f"],"repoTags":["docker.io/library/nginx:latest"],"size":"191841612"},{"
id":"cd6e84c36e401554600a0560d16502887e6252a0b856d6daa04990cfb379f2ca","repoDigests":["localhost/my-image@sha256:48c05f2eb361c121069f125a3fffc9bd7a535353809da00e22b1a1eb56ed06f1"],"repoTags":["localhost/my-image:functional-791037"],"size":"1468194"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":["registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf","registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"92728217"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"0636cbff18b4482ce8ee0c6c72b6ac6fa6e4edefcb6e45c6934d4cdbe5e2c7ae","repoDigests":["docker.io/library/035f9ac810153d18e413110b89d9157c6a49d97847d890716d533e341146e0dc
-tmp@sha256:51eb3efb3abeb008c6c2bea9e67ac9637178553aa43a8e527ebc16cdf4ac229f"],"repoTags":[],"size":"1465612"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"b4df9ad58a4c457eefbf955ad5e35b8da4945656806af130c97bfcbad1e23ac3","repoDigests":["localhost/minikube-local-cache-test@sha256:b5f9d396db5fd593383ae94a77391c71de671fcf2cae0c0cd21e175bcff1000f"],"repoTags":["localhost/minikube-local-cache-test:functional-791037"],"size":"3330"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:21
69b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-791037 image ls --format json --alsologtostderr:
I0819 12:09:32.436651  127859 out.go:345] Setting OutFile to fd 1 ...
I0819 12:09:32.436760  127859 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 12:09:32.436770  127859 out.go:358] Setting ErrFile to fd 2...
I0819 12:09:32.436774  127859 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 12:09:32.436946  127859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-77145/.minikube/bin
I0819 12:09:32.437493  127859 config.go:182] Loaded profile config "functional-791037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 12:09:32.437590  127859 config.go:182] Loaded profile config "functional-791037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 12:09:32.438025  127859 cli_runner.go:164] Run: docker container inspect functional-791037 --format={{.State.Status}}
I0819 12:09:32.455111  127859 ssh_runner.go:195] Run: systemctl --version
I0819 12:09:32.455156  127859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-791037
I0819 12:09:32.471459  127859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/functional-791037/id_rsa Username:docker}
I0819 12:09:32.558232  127859 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.20s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-791037 image ls --format yaml --alsologtostderr:
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a
repoDigests:
- docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0
- docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158
repoTags:
- docker.io/library/nginx:alpine
size: "44668625"
- id: 5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c
repoDigests:
- docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add
- docker.io/library/nginx@sha256:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f
repoTags:
- docker.io/library/nginx:latest
size: "191841612"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-791037
size: "4943877"
- id: b4df9ad58a4c457eefbf955ad5e35b8da4945656806af130c97bfcbad1e23ac3
repoDigests:
- localhost/minikube-local-cache-test@sha256:b5f9d396db5fd593383ae94a77391c71de671fcf2cae0c0cd21e175bcff1000f
repoTags:
- localhost/minikube-local-cache-test:functional-791037
size: "3330"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "89437512"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "68420936"
- id: 917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "87165492"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "95233506"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "92728217"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-791037 image ls --format yaml --alsologtostderr:
I0819 12:09:27.721626  127095 out.go:345] Setting OutFile to fd 1 ...
I0819 12:09:27.721982  127095 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 12:09:27.721993  127095 out.go:358] Setting ErrFile to fd 2...
I0819 12:09:27.721997  127095 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 12:09:27.722243  127095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-77145/.minikube/bin
I0819 12:09:27.722857  127095 config.go:182] Loaded profile config "functional-791037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 12:09:27.722971  127095 config.go:182] Loaded profile config "functional-791037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 12:09:27.723367  127095 cli_runner.go:164] Run: docker container inspect functional-791037 --format={{.State.Status}}
I0819 12:09:27.739722  127095 ssh_runner.go:195] Run: systemctl --version
I0819 12:09:27.739768  127095 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-791037
I0819 12:09:27.762700  127095 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/functional-791037/id_rsa Username:docker}
I0819 12:09:27.850058  127095 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.5s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-791037 ssh pgrep buildkitd: exit status 1 (245.51233ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 image build -t localhost/my-image:functional-791037 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-791037 image build -t localhost/my-image:functional-791037 testdata/build --alsologtostderr: (4.040457566s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-791037 image build -t localhost/my-image:functional-791037 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 0636cbff18b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-791037
--> cd6e84c36e4
Successfully tagged localhost/my-image:functional-791037
cd6e84c36e401554600a0560d16502887e6252a0b856d6daa04990cfb379f2ca
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-791037 image build -t localhost/my-image:functional-791037 testdata/build --alsologtostderr:
I0819 12:09:28.186052  127267 out.go:345] Setting OutFile to fd 1 ...
I0819 12:09:28.186311  127267 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 12:09:28.186320  127267 out.go:358] Setting ErrFile to fd 2...
I0819 12:09:28.186324  127267 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 12:09:28.186523  127267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-77145/.minikube/bin
I0819 12:09:28.187093  127267 config.go:182] Loaded profile config "functional-791037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 12:09:28.187720  127267 config.go:182] Loaded profile config "functional-791037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 12:09:28.188166  127267 cli_runner.go:164] Run: docker container inspect functional-791037 --format={{.State.Status}}
I0819 12:09:28.205720  127267 ssh_runner.go:195] Run: systemctl --version
I0819 12:09:28.205765  127267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-791037
I0819 12:09:28.222406  127267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/functional-791037/id_rsa Username:docker}
I0819 12:09:28.306463  127267 build_images.go:161] Building image from path: /tmp/build.3520148591.tar
I0819 12:09:28.306534  127267 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0819 12:09:28.315953  127267 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3520148591.tar
I0819 12:09:28.319688  127267 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3520148591.tar: stat -c "%s %y" /var/lib/minikube/build/build.3520148591.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3520148591.tar': No such file or directory
I0819 12:09:28.319713  127267 ssh_runner.go:362] scp /tmp/build.3520148591.tar --> /var/lib/minikube/build/build.3520148591.tar (3072 bytes)
I0819 12:09:28.351827  127267 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3520148591
I0819 12:09:28.361868  127267 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3520148591 -xf /var/lib/minikube/build/build.3520148591.tar
I0819 12:09:28.370351  127267 crio.go:315] Building image: /var/lib/minikube/build/build.3520148591
I0819 12:09:28.370455  127267 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-791037 /var/lib/minikube/build/build.3520148591 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0819 12:09:32.159886  127267 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-791037 /var/lib/minikube/build/build.3520148591 --cgroup-manager=cgroupfs: (3.789397672s)
I0819 12:09:32.159975  127267 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3520148591
I0819 12:09:32.168424  127267 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3520148591.tar
I0819 12:09:32.176372  127267 build_images.go:217] Built localhost/my-image:functional-791037 from /tmp/build.3520148591.tar
I0819 12:09:32.176407  127267 build_images.go:133] succeeded building to: functional-791037
I0819 12:09:32.176413  127267 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.50s)
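
Note: the failed pgrep is the probe for buildkitd, which crio clusters do not run; the trace shows the fallback path: minikube tars up the build context, copies it into the node, and runs podman build there with the cgroupfs manager. All of that plumbing sits behind a single user-facing call:

    # the tar/scp/podman steps in the trace happen behind this one command
    out/minikube-linux-amd64 -p functional-791037 image build -t localhost/my-image:functional-791037 testdata/build
    # localhost/my-image:functional-791037 should now appear in the listing
    out/minikube-linux-amd64 -p functional-791037 image ls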

TestFunctional/parallel/ImageCommands/Setup (2.03s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.004177191s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-791037
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.03s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-791037 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.168.243 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
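
Note: AccessDirect asserts that the tunnel makes the service's cluster-side IP routable from the host itself, with no NodePort involved; reproducible with a plain curl against the IP from this run, for as long as the tunnel process is alive:

    curl -s http://10.108.168.243/ -o /dev/null && echo "tunnel at 10.108.168.243 is working"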

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-791037 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

TestFunctional/parallel/ProfileCmd/profile_list (0.36s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "309.243663ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "53.124308ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

TestFunctional/parallel/MountCmd/any-port (7.81s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-791037 /tmp/TestFunctionalparallelMountCmdany-port3441537052/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724069353905298527" to /tmp/TestFunctionalparallelMountCmdany-port3441537052/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724069353905298527" to /tmp/TestFunctionalparallelMountCmdany-port3441537052/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724069353905298527" to /tmp/TestFunctionalparallelMountCmdany-port3441537052/001/test-1724069353905298527
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-791037 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (274.24122ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 19 12:09 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 19 12:09 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 19 12:09 test-1724069353905298527
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 ssh cat /mount-9p/test-1724069353905298527
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-791037 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [910927db-ea88-40d7-a9cb-d2dc851abb25] Pending
helpers_test.go:344: "busybox-mount" [910927db-ea88-40d7-a9cb-d2dc851abb25] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [910927db-ea88-40d7-a9cb-d2dc851abb25] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [910927db-ea88-40d7-a9cb-d2dc851abb25] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003460316s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-791037 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-791037 /tmp/TestFunctionalparallelMountCmdany-port3441537052/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.81s)
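
Note: this subtest exercises the 9p mount end to end: `minikube mount <host-dir>:/mount-9p` runs as a daemon, files written on the host appear in the guest, a busybox pod writes back through the mount, and the daemon is stopped. The first `findmnt` failure above is expected noise; the mount daemon was still starting, and the test simply retries. A hand-run sketch (the host path is a placeholder):

	$ minikube -p functional-791037 mount /tmp/share:/mount-9p &
	$ minikube -p functional-791037 ssh "findmnt -T /mount-9p"    # confirm the 9p mount
	$ echo hello > /tmp/share/from-host
	$ minikube -p functional-791037 ssh cat /mount-9p/from-host   # host file visible in guest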

TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "311.165599ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "47.625686ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.38s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 image load --daemon kicbase/echo-server:functional-791037 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-791037 image load --daemon kicbase/echo-server:functional-791037 --alsologtostderr: (1.152481795s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.38s)
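
Note: `image load --daemon <image>` copies an image out of the host's Docker daemon into the cluster's container runtime (CRI-O in this job); `image ls` then confirms it arrived. A sketch, assuming the echo-server image exists locally:

	$ docker tag kicbase/echo-server:latest kicbase/echo-server:functional-791037
	$ minikube -p functional-791037 image load --daemon kicbase/echo-server:functional-791037
	$ minikube -p functional-791037 image ls | grep echo-server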

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 image load --daemon kicbase/echo-server:functional-791037 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)
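
Note: all three UpdateContextCmd subtests run the same command and differ only in the kubeconfig state they start from (nothing changed, the minikube cluster entry missing, no clusters at all). `update-context` rewrites the profile's kubeconfig entry so the server address matches the running cluster, which matters when a restarted cluster comes back on a different IP or port:

	$ minikube -p functional-791037 update-context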

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.76s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-791037
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 image load --daemon kicbase/echo-server:functional-791037 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.76s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 image save kicbase/echo-server:functional-791037 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 image rm kicbase/echo-server:functional-791037 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.73s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.73s)
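
Note: the last three image subtests form a round trip: `image save` exports the image to a tarball, `image rm` deletes it from the cluster runtime, and `image load <file>` restores it. By hand (the tarball path is arbitrary):

	$ minikube -p functional-791037 image save kicbase/echo-server:functional-791037 /tmp/echo.tar
	$ minikube -p functional-791037 image rm kicbase/echo-server:functional-791037
	$ minikube -p functional-791037 image load /tmp/echo.tar
	$ minikube -p functional-791037 image ls   # the image is back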

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.52s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-791037
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 image save --daemon kicbase/echo-server:functional-791037 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-791037
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.52s)
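
Note: `image save --daemon` pushes an image from the cluster back into the host's Docker daemon. The `docker image inspect localhost/kicbase/echo-server:...` on the last step is the tell-tale detail: in this run the image round-tripped through CRI-O comes back under a `localhost/` registry prefix, so that is the name to inspect on the host:

	$ docker rmi kicbase/echo-server:functional-791037     # drop the host copy first
	$ minikube -p functional-791037 image save --daemon kicbase/echo-server:functional-791037
	$ docker image inspect localhost/kicbase/echo-server:functional-791037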

TestFunctional/parallel/MountCmd/specific-port (2.35s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-791037 /tmp/TestFunctionalparallelMountCmdspecific-port1878159990/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-791037 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (446.413709ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-791037 /tmp/TestFunctionalparallelMountCmdspecific-port1878159990/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-791037 ssh "sudo umount -f /mount-9p": exit status 1 (390.916008ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-791037 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-791037 /tmp/TestFunctionalparallelMountCmdspecific-port1878159990/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.35s)
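
Note: same 9p flow as any-port, but pinned to a fixed port with `--port 46464`. The `umount: /mount-9p: not mounted.` (exit status 32) near the end is expected: stopping the mount daemon had already torn the mount down, so the forced umount in the cleanup path found nothing to do. Sketch:

	$ minikube -p functional-791037 mount /tmp/share:/mount-9p --port 46464 &
	$ minikube -p functional-791037 ssh "findmnt -T /mount-9p"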

TestFunctional/parallel/MountCmd/VerifyCleanup (2.04s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-791037 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1555593448/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-791037 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1555593448/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-791037 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1555593448/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-791037 ssh "findmnt -T" /mount1: exit status 1 (584.807091ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-791037 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-791037 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-791037 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1555593448/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-791037 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1555593448/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-791037 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1555593448/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.04s)
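
Note: this subtest mounts one host directory at /mount1, /mount2 and /mount3, then checks that a single `mount --kill=true` reaps every mount daemon; the three "unable to find parent, assuming dead" lines confirm the processes were already gone when the per-mount cleanup ran. Sketch:

	$ minikube -p functional-791037 mount /tmp/share:/mount1 &
	$ minikube -p functional-791037 mount /tmp/share:/mount2 &
	$ minikube -p functional-791037 mount /tmp/share:/mount3 &
	$ minikube -p functional-791037 mount --kill=true   # kill all mount daemons for the profile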

TestFunctional/delete_echo-server_images (0.03s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-791037
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-791037
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-791037
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (107.99s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-567188 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0819 12:10:24.071038   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:10:24.078229   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:10:24.089702   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:10:24.112435   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:10:24.154713   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:10:24.236143   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:10:24.398173   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:10:24.720076   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:10:25.362143   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:10:26.643901   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:10:29.205449   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:10:34.327018   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:10:44.569076   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:11:05.050785   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-567188 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m47.321332356s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (107.99s)
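
Note: `--ha` requests a highly available topology: three control-plane nodes behind a single virtual API endpoint (192.168.49.254:8443 in the status output further down). The interleaved E0819 cert_rotation lines appear to be leftover noise from the already-deleted addons-010148 profile, whose client.crt path no longer exists; they do not affect this test. The start command, as run here:

	$ minikube start -p ha-567188 --ha --wait=true --memory=2200 \
	      --driver=docker --container-runtime=crio
	$ minikube -p ha-567188 status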

TestMultiControlPlane/serial/DeployApp (5.61s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-567188 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-567188 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-567188 -- rollout status deployment/busybox: (3.778178803s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-567188 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-567188 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-567188 -- exec busybox-7dff88458-bskpf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-567188 -- exec busybox-7dff88458-ct7rq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-567188 -- exec busybox-7dff88458-gd95w -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-567188 -- exec busybox-7dff88458-bskpf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-567188 -- exec busybox-7dff88458-ct7rq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-567188 -- exec busybox-7dff88458-gd95w -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-567188 -- exec busybox-7dff88458-bskpf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-567188 -- exec busybox-7dff88458-ct7rq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-567188 -- exec busybox-7dff88458-gd95w -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.61s)

TestMultiControlPlane/serial/PingHostFromPods (1.02s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-567188 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-567188 -- exec busybox-7dff88458-bskpf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-567188 -- exec busybox-7dff88458-bskpf -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-567188 -- exec busybox-7dff88458-ct7rq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-567188 -- exec busybox-7dff88458-ct7rq -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-567188 -- exec busybox-7dff88458-gd95w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-567188 -- exec busybox-7dff88458-gd95w -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.02s)

TestMultiControlPlane/serial/AddWorkerNode (36.94s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-567188 -v=7 --alsologtostderr
E0819 12:11:46.012689   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-567188 -v=7 --alsologtostderr: (36.132877195s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (36.94s)
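
Note: `node add` without `--control-plane` joins a plain worker, which is why the new node surfaces later as ha-567188-m04 with type "Worker" in the status output:

	$ minikube node add -p ha-567188
	$ minikube -p ha-567188 status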

TestMultiControlPlane/serial/NodeLabels (0.06s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-567188 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.61s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.61s)

TestMultiControlPlane/serial/CopyFile (15.07s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 cp testdata/cp-test.txt ha-567188:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 ssh -n ha-567188 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 cp ha-567188:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3362376969/001/cp-test_ha-567188.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 ssh -n ha-567188 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 cp ha-567188:/home/docker/cp-test.txt ha-567188-m02:/home/docker/cp-test_ha-567188_ha-567188-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 ssh -n ha-567188 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 ssh -n ha-567188-m02 "sudo cat /home/docker/cp-test_ha-567188_ha-567188-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 cp ha-567188:/home/docker/cp-test.txt ha-567188-m03:/home/docker/cp-test_ha-567188_ha-567188-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 ssh -n ha-567188 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 ssh -n ha-567188-m03 "sudo cat /home/docker/cp-test_ha-567188_ha-567188-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 cp ha-567188:/home/docker/cp-test.txt ha-567188-m04:/home/docker/cp-test_ha-567188_ha-567188-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 ssh -n ha-567188 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 ssh -n ha-567188-m04 "sudo cat /home/docker/cp-test_ha-567188_ha-567188-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 cp testdata/cp-test.txt ha-567188-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 ssh -n ha-567188-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 cp ha-567188-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3362376969/001/cp-test_ha-567188-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 ssh -n ha-567188-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 cp ha-567188-m02:/home/docker/cp-test.txt ha-567188:/home/docker/cp-test_ha-567188-m02_ha-567188.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 ssh -n ha-567188-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 ssh -n ha-567188 "sudo cat /home/docker/cp-test_ha-567188-m02_ha-567188.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 cp ha-567188-m02:/home/docker/cp-test.txt ha-567188-m03:/home/docker/cp-test_ha-567188-m02_ha-567188-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 ssh -n ha-567188-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 ssh -n ha-567188-m03 "sudo cat /home/docker/cp-test_ha-567188-m02_ha-567188-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 cp ha-567188-m02:/home/docker/cp-test.txt ha-567188-m04:/home/docker/cp-test_ha-567188-m02_ha-567188-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 ssh -n ha-567188-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 ssh -n ha-567188-m04 "sudo cat /home/docker/cp-test_ha-567188-m02_ha-567188-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 cp testdata/cp-test.txt ha-567188-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 ssh -n ha-567188-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 cp ha-567188-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3362376969/001/cp-test_ha-567188-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 ssh -n ha-567188-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 cp ha-567188-m03:/home/docker/cp-test.txt ha-567188:/home/docker/cp-test_ha-567188-m03_ha-567188.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 ssh -n ha-567188-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 ssh -n ha-567188 "sudo cat /home/docker/cp-test_ha-567188-m03_ha-567188.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 cp ha-567188-m03:/home/docker/cp-test.txt ha-567188-m02:/home/docker/cp-test_ha-567188-m03_ha-567188-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 ssh -n ha-567188-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 ssh -n ha-567188-m02 "sudo cat /home/docker/cp-test_ha-567188-m03_ha-567188-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 cp ha-567188-m03:/home/docker/cp-test.txt ha-567188-m04:/home/docker/cp-test_ha-567188-m03_ha-567188-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 ssh -n ha-567188-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 ssh -n ha-567188-m04 "sudo cat /home/docker/cp-test_ha-567188-m03_ha-567188-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 cp testdata/cp-test.txt ha-567188-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 ssh -n ha-567188-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 cp ha-567188-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3362376969/001/cp-test_ha-567188-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 ssh -n ha-567188-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 cp ha-567188-m04:/home/docker/cp-test.txt ha-567188:/home/docker/cp-test_ha-567188-m04_ha-567188.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 ssh -n ha-567188-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 ssh -n ha-567188 "sudo cat /home/docker/cp-test_ha-567188-m04_ha-567188.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 cp ha-567188-m04:/home/docker/cp-test.txt ha-567188-m02:/home/docker/cp-test_ha-567188-m04_ha-567188-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 ssh -n ha-567188-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 ssh -n ha-567188-m02 "sudo cat /home/docker/cp-test_ha-567188-m04_ha-567188-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 cp ha-567188-m04:/home/docker/cp-test.txt ha-567188-m03:/home/docker/cp-test_ha-567188-m04_ha-567188-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 ssh -n ha-567188-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 ssh -n ha-567188-m03 "sudo cat /home/docker/cp-test_ha-567188-m04_ha-567188-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.07s)
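
Note: the long sequence above is `minikube cp` exercised in every direction. The path syntax is [<node>:]<path>: a bare path means the host, a prefixed path means that node's filesystem. One example per direction, taken from the run:

	$ minikube -p ha-567188 cp testdata/cp-test.txt ha-567188:/home/docker/cp-test.txt      # host -> node
	$ minikube -p ha-567188 cp ha-567188:/home/docker/cp-test.txt /tmp/cp-test.txt          # node -> host
	$ minikube -p ha-567188 cp ha-567188:/home/docker/cp-test.txt ha-567188-m02:/home/docker/cp-test.txt  # node -> node
	$ minikube -p ha-567188 ssh -n ha-567188-m02 "sudo cat /home/docker/cp-test.txt"        # verify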

TestMultiControlPlane/serial/StopSecondaryNode (12.42s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-567188 node stop m02 -v=7 --alsologtostderr: (11.800411398s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-567188 status -v=7 --alsologtostderr: exit status 7 (623.416893ms)
-- stdout --
	ha-567188
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-567188-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-567188-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-567188-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0819 12:12:45.357127  148976 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:12:45.357418  148976 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:12:45.357428  148976 out.go:358] Setting ErrFile to fd 2...
	I0819 12:12:45.357433  148976 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:12:45.357631  148976 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-77145/.minikube/bin
	I0819 12:12:45.357862  148976 out.go:352] Setting JSON to false
	I0819 12:12:45.357901  148976 mustload.go:65] Loading cluster: ha-567188
	I0819 12:12:45.358008  148976 notify.go:220] Checking for updates...
	I0819 12:12:45.358361  148976 config.go:182] Loaded profile config "ha-567188": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:12:45.358379  148976 status.go:255] checking status of ha-567188 ...
	I0819 12:12:45.358805  148976 cli_runner.go:164] Run: docker container inspect ha-567188 --format={{.State.Status}}
	I0819 12:12:45.377097  148976 status.go:330] ha-567188 host status = "Running" (err=<nil>)
	I0819 12:12:45.377143  148976 host.go:66] Checking if "ha-567188" exists ...
	I0819 12:12:45.377409  148976 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-567188
	I0819 12:12:45.395603  148976 host.go:66] Checking if "ha-567188" exists ...
	I0819 12:12:45.395875  148976 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:12:45.395924  148976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-567188
	I0819 12:12:45.413554  148976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/ha-567188/id_rsa Username:docker}
	I0819 12:12:45.498901  148976 ssh_runner.go:195] Run: systemctl --version
	I0819 12:12:45.502912  148976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:12:45.513789  148976 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 12:12:45.565579  148976 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-08-19 12:12:45.556713504 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 12:12:45.566187  148976 kubeconfig.go:125] found "ha-567188" server: "https://192.168.49.254:8443"
	I0819 12:12:45.566219  148976 api_server.go:166] Checking apiserver status ...
	I0819 12:12:45.566257  148976 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:12:45.577254  148976 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1484/cgroup
	I0819 12:12:45.585663  148976 api_server.go:182] apiserver freezer: "13:freezer:/docker/c919949cffd067de0174a03e70b879414b747dab382d85eec3acf9d124d2a659/crio/crio-46d95a39416e928d1133f208ddc9fcb4e720297465fb3d91789dc1632d9c3e91"
	I0819 12:12:45.585717  148976 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c919949cffd067de0174a03e70b879414b747dab382d85eec3acf9d124d2a659/crio/crio-46d95a39416e928d1133f208ddc9fcb4e720297465fb3d91789dc1632d9c3e91/freezer.state
	I0819 12:12:45.593284  148976 api_server.go:204] freezer state: "THAWED"
	I0819 12:12:45.593307  148976 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0819 12:12:45.598135  148976 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0819 12:12:45.598163  148976 status.go:422] ha-567188 apiserver status = Running (err=<nil>)
	I0819 12:12:45.598181  148976 status.go:257] ha-567188 status: &{Name:ha-567188 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 12:12:45.598202  148976 status.go:255] checking status of ha-567188-m02 ...
	I0819 12:12:45.598431  148976 cli_runner.go:164] Run: docker container inspect ha-567188-m02 --format={{.State.Status}}
	I0819 12:12:45.615426  148976 status.go:330] ha-567188-m02 host status = "Stopped" (err=<nil>)
	I0819 12:12:45.615448  148976 status.go:343] host is not running, skipping remaining checks
	I0819 12:12:45.615454  148976 status.go:257] ha-567188-m02 status: &{Name:ha-567188-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 12:12:45.615481  148976 status.go:255] checking status of ha-567188-m03 ...
	I0819 12:12:45.615735  148976 cli_runner.go:164] Run: docker container inspect ha-567188-m03 --format={{.State.Status}}
	I0819 12:12:45.632765  148976 status.go:330] ha-567188-m03 host status = "Running" (err=<nil>)
	I0819 12:12:45.632792  148976 host.go:66] Checking if "ha-567188-m03" exists ...
	I0819 12:12:45.633044  148976 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-567188-m03
	I0819 12:12:45.649303  148976 host.go:66] Checking if "ha-567188-m03" exists ...
	I0819 12:12:45.649568  148976 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:12:45.649620  148976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-567188-m03
	I0819 12:12:45.666956  148976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/ha-567188-m03/id_rsa Username:docker}
	I0819 12:12:45.750880  148976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:12:45.761256  148976 kubeconfig.go:125] found "ha-567188" server: "https://192.168.49.254:8443"
	I0819 12:12:45.761286  148976 api_server.go:166] Checking apiserver status ...
	I0819 12:12:45.761330  148976 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:12:45.770985  148976 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1432/cgroup
	I0819 12:12:45.779014  148976 api_server.go:182] apiserver freezer: "13:freezer:/docker/e6bf953ea6184d07ce944dbea3180f2c1d839a7bcb64559e3baeac5a64125d34/crio/crio-3317c72b892f23387d61659d76677d8a44074ef179cf13aacea56636c07a1d33"
	I0819 12:12:45.779082  148976 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e6bf953ea6184d07ce944dbea3180f2c1d839a7bcb64559e3baeac5a64125d34/crio/crio-3317c72b892f23387d61659d76677d8a44074ef179cf13aacea56636c07a1d33/freezer.state
	I0819 12:12:45.786441  148976 api_server.go:204] freezer state: "THAWED"
	I0819 12:12:45.786464  148976 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0819 12:12:45.789955  148976 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0819 12:12:45.789976  148976 status.go:422] ha-567188-m03 apiserver status = Running (err=<nil>)
	I0819 12:12:45.789984  148976 status.go:257] ha-567188-m03 status: &{Name:ha-567188-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 12:12:45.789999  148976 status.go:255] checking status of ha-567188-m04 ...
	I0819 12:12:45.790237  148976 cli_runner.go:164] Run: docker container inspect ha-567188-m04 --format={{.State.Status}}
	I0819 12:12:45.807624  148976 status.go:330] ha-567188-m04 host status = "Running" (err=<nil>)
	I0819 12:12:45.807647  148976 host.go:66] Checking if "ha-567188-m04" exists ...
	I0819 12:12:45.807886  148976 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-567188-m04
	I0819 12:12:45.824182  148976 host.go:66] Checking if "ha-567188-m04" exists ...
	I0819 12:12:45.824546  148976 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:12:45.824585  148976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-567188-m04
	I0819 12:12:45.841152  148976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/ha-567188-m04/id_rsa Username:docker}
	I0819 12:12:45.926761  148976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:12:45.937013  148976 status.go:257] ha-567188-m04 status: &{Name:ha-567188-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.42s)
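
Note: after `node stop m02`, `minikube status` deliberately exits non-zero (exit status 7 in this run) so scripts can detect a degraded cluster, while the stdout block shows m02 fully Stopped and the surviving control planes still answering /healthz through the shared 192.168.49.254:8443 endpoint. By hand:

	$ minikube -p ha-567188 node stop m02
	$ minikube -p ha-567188 status; echo "exit: $?"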

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.47s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.47s)

TestMultiControlPlane/serial/RestartSecondaryNode (19.73s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-567188 node start m02 -v=7 --alsologtostderr: (18.866496965s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (19.73s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (7.9s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0819 12:13:07.934029   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (7.899627469s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (7.90s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (194.18s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-567188 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-567188 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-567188 -v=7 --alsologtostderr: (36.575971249s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-567188 --wait=true -v=7 --alsologtostderr
E0819 12:13:58.945890   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/functional-791037/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:13:58.952314   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/functional-791037/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:13:58.963798   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/functional-791037/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:13:58.985287   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/functional-791037/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:13:59.026793   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/functional-791037/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:13:59.108345   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/functional-791037/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:13:59.269920   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/functional-791037/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:13:59.591790   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/functional-791037/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:14:00.233999   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/functional-791037/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:14:01.515597   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/functional-791037/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:14:04.076863   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/functional-791037/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:14:09.198240   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/functional-791037/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:14:19.440230   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/functional-791037/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:14:39.922504   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/functional-791037/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:15:20.884045   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/functional-791037/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:15:24.070386   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:15:51.775952   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-567188 --wait=true -v=7 --alsologtostderr: (2m37.509055216s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-567188
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (194.18s)
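
Note: the point of this test is the round trip: `stop` halts all four nodes, and a plain `start --wait=true` on the existing profile restores the same topology (checked by diffing `node list` before and after) instead of re-provisioning a single-node cluster:

	$ minikube node list -p ha-567188
	$ minikube stop -p ha-567188
	$ minikube start -p ha-567188 --wait=true
	$ minikube node list -p ha-567188   # same four nodes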

TestMultiControlPlane/serial/DeleteSecondaryNode (12.1s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-567188 node delete m03 -v=7 --alsologtostderr: (11.34843673s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.10s)
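
Note: deleting a control-plane node (m03) must leave the cluster healthy; the go-template query on the last step prints each remaining node's Ready condition, a handy one-liner after any topology change:

	$ minikube -p ha-567188 node delete m03
	$ kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'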

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.44s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.44s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (35.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 stop -v=7 --alsologtostderr
E0819 12:16:42.805884   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/functional-791037/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-567188 stop -v=7 --alsologtostderr: (35.434109488s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-567188 status -v=7 --alsologtostderr: exit status 7 (99.534402ms)

                                                
                                                
-- stdout --
	ha-567188
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-567188-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-567188-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 12:17:16.242794  166755 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:17:16.242913  166755 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:17:16.242918  166755 out.go:358] Setting ErrFile to fd 2...
	I0819 12:17:16.242922  166755 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:17:16.243085  166755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-77145/.minikube/bin
	I0819 12:17:16.243261  166755 out.go:352] Setting JSON to false
	I0819 12:17:16.243289  166755 mustload.go:65] Loading cluster: ha-567188
	I0819 12:17:16.243442  166755 notify.go:220] Checking for updates...
	I0819 12:17:16.243667  166755 config.go:182] Loaded profile config "ha-567188": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:17:16.243683  166755 status.go:255] checking status of ha-567188 ...
	I0819 12:17:16.244092  166755 cli_runner.go:164] Run: docker container inspect ha-567188 --format={{.State.Status}}
	I0819 12:17:16.262119  166755 status.go:330] ha-567188 host status = "Stopped" (err=<nil>)
	I0819 12:17:16.262144  166755 status.go:343] host is not running, skipping remaining checks
	I0819 12:17:16.262153  166755 status.go:257] ha-567188 status: &{Name:ha-567188 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 12:17:16.262191  166755 status.go:255] checking status of ha-567188-m02 ...
	I0819 12:17:16.262443  166755 cli_runner.go:164] Run: docker container inspect ha-567188-m02 --format={{.State.Status}}
	I0819 12:17:16.280857  166755 status.go:330] ha-567188-m02 host status = "Stopped" (err=<nil>)
	I0819 12:17:16.280898  166755 status.go:343] host is not running, skipping remaining checks
	I0819 12:17:16.280905  166755 status.go:257] ha-567188-m02 status: &{Name:ha-567188-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 12:17:16.280936  166755 status.go:255] checking status of ha-567188-m04 ...
	I0819 12:17:16.281196  166755 cli_runner.go:164] Run: docker container inspect ha-567188-m04 --format={{.State.Status}}
	I0819 12:17:16.298395  166755 status.go:330] ha-567188-m04 host status = "Stopped" (err=<nil>)
	I0819 12:17:16.298425  166755 status.go:343] host is not running, skipping remaining checks
	I0819 12:17:16.298432  166755 status.go:257] ha-567188-m04 status: &{Name:ha-567188-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.53s)
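
The Non-zero exit above is expected, which is why the test still passes: per minikube's status help text, the exit code encodes component state bitwise (1 = host not OK, 2 = cluster not OK, 4 = Kubernetes not OK), so a fully stopped cluster yields exit status 7. A sketch of how a harness can distinguish that deliberate non-zero exit from a failure to launch the binary at all; the path and profile name are taken from the log above.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Binary path and profile name as they appear in the log.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-567188", "status")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("all components running:\n%s", out)
	case errors.As(err, &exitErr):
		// 7 = 1 (host) + 2 (cluster) + 4 (kubernetes), all stopped.
		fmt.Printf("exit status %d\n%s", exitErr.ExitCode(), out)
	default:
		fmt.Println("command did not run:", err)
	}
}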

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (112.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-567188 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0819 12:18:58.946161   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/functional-791037/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-567188 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m51.669065002s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (112.41s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.46s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (38.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-567188 --control-plane -v=7 --alsologtostderr
E0819 12:19:26.647587   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/functional-791037/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-567188 --control-plane -v=7 --alsologtostderr: (38.000897872s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-567188 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (38.82s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.62s)

                                                
                                    
x
+
TestJSONOutput/start/Command (41.72s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-757437 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0819 12:20:24.071231   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-757437 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (41.723258226s)
--- PASS: TestJSONOutput/start/Command (41.72s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-757437 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.57s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-757437 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.57s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.71s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-757437 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-757437 --output=json --user=testUser: (5.70868736s)
--- PASS: TestJSONOutput/stop/Command (5.71s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-432323 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-432323 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (62.827256ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"46ead1e1-0205-40a4-ba12-5734419eb119","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-432323] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ec61af22-aa8a-4c7c-986f-ab5339af7655","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19479"}}
	{"specversion":"1.0","id":"5aad987d-10b7-435c-af42-b7731196202c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e6ba8a5e-25ca-4210-ba88-ec8a884519fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19479-77145/kubeconfig"}}
	{"specversion":"1.0","id":"f54920cf-2d4f-4fa4-9d60-347a0c408925","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-77145/.minikube"}}
	{"specversion":"1.0","id":"113a1c06-a40a-4bf3-9bcf-d504deb415f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"6dd00e52-8471-4d21-8d2c-e8a4b21407d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f6cce34d-b142-4eb9-9528-bbcf7260cc4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-432323" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-432323
--- PASS: TestErrorJSONOutput (0.20s)
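
The stdout captured above is minikube's --output=json stream: one CloudEvents-style JSON object per line, with step, info, and error event types. A minimal reader sketch follows; the struct mirrors only the envelope fields visible in the captured lines, not minikube's full schema, and the input line is lifted directly from the output above.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// cloudEvent mirrors the envelope seen in the stdout above; this is an
// illustrative reader, not minikube's own schema definition.
type cloudEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// One line lifted from the test output above.
	in := `{"specversion":"1.0","id":"f6cce34d-b142-4eb9-9528-bbcf7260cc4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`
	sc := bufio.NewScanner(strings.NewReader(in))
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines interleaved in the stream
		}
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}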

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (38.1s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-039529 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-039529 --network=: (36.0552594s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-039529" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-039529
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-039529: (2.022276303s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.10s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (24.88s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-937510 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-937510 --network=bridge: (23.030839145s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-937510" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-937510
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-937510: (1.829795142s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.88s)

                                                
                                    
x
+
TestKicExistingNetwork (22.33s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-431718 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-431718 --network=existing-network: (20.370034887s)
helpers_test.go:175: Cleaning up "existing-network-431718" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-431718
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-431718: (1.814119736s)
--- PASS: TestKicExistingNetwork (22.33s)

                                                
                                    
x
+
TestKicCustomSubnet (23.41s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-105722 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-105722 --subnet=192.168.60.0/24: (21.330065569s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-105722 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-105722" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-105722
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-105722: (2.057545367s)
--- PASS: TestKicCustomSubnet (23.41s)
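
The format string at kic_custom_network_test.go:161, {{(index .IPAM.Config 0).Subnet}}, indexes into the network's IPAM config list to recover the subnet the test requested. A small Go sketch of the same template evaluated against a hand-rolled subset of the docker network inspect document; the field names follow Docker's JSON, but the struct here is illustrative, not Docker's type.

package main

import (
	"os"
	"text/template"
)

// Illustrative subset of one element of `docker network inspect` output;
// only the fields the format string touches are modeled.
type ipamConfig struct{ Subnet string }
type network struct {
	IPAM struct{ Config []ipamConfig }
}

func main() {
	var n network
	n.IPAM.Config = []ipamConfig{{Subnet: "192.168.60.0/24"}}
	// The exact format string the test passes via --format.
	tmpl := template.Must(template.New("subnet").Parse(`{{(index .IPAM.Config 0).Subnet}}`))
	if err := tmpl.Execute(os.Stdout, n); err != nil {
		panic(err)
	}
	// Prints: 192.168.60.0/24
}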

                                                
                                    
x
+
TestKicStaticIP (22.72s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-033435 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-033435 --static-ip=192.168.200.200: (20.631913424s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-033435 ip
helpers_test.go:175: Cleaning up "static-ip-033435" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-033435
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-033435: (1.963566904s)
--- PASS: TestKicStaticIP (22.72s)

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (49.82s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-744602 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-744602 --driver=docker  --container-runtime=crio: (24.051233475s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-747204 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-747204 --driver=docker  --container-runtime=crio: (20.711938703s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-744602
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-747204
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-747204" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-747204
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-747204: (1.860090931s)
helpers_test.go:175: Cleaning up "first-744602" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-744602
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-744602: (2.168770369s)
--- PASS: TestMinikubeProfile (49.82s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (5.47s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-672916 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-672916 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.467467545s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.47s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.23s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-672916 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.23s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (8.45s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-687553 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E0819 12:23:58.946578   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/functional-791037/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-687553 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.447923774s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.45s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.23s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-687553 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.23s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.58s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-672916 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-672916 --alsologtostderr -v=5: (1.577476613s)
--- PASS: TestMountStart/serial/DeleteFirst (1.58s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-687553 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.17s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-687553
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-687553: (1.168365952s)
--- PASS: TestMountStart/serial/Stop (1.17s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.42s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-687553
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-687553: (6.414996008s)
--- PASS: TestMountStart/serial/RestartStopped (7.42s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.23s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-687553 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.23s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (68.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-844915 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0819 12:25:24.070639   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-844915 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m7.68754413s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (68.11s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-844915 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-844915 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-844915 -- rollout status deployment/busybox: (3.743941828s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-844915 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-844915 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-844915 -- exec busybox-7dff88458-948tg -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-844915 -- exec busybox-7dff88458-sms9w -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-844915 -- exec busybox-7dff88458-948tg -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-844915 -- exec busybox-7dff88458-sms9w -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-844915 -- exec busybox-7dff88458-948tg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-844915 -- exec busybox-7dff88458-sms9w -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.02s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-844915 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-844915 -- exec busybox-7dff88458-948tg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-844915 -- exec busybox-7dff88458-948tg -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-844915 -- exec busybox-7dff88458-sms9w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-844915 -- exec busybox-7dff88458-sms9w -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.69s)
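
The pod-side pipeline at multinode_test.go:572, nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3, takes the fifth line of busybox's nslookup output and returns its third space-separated field, i.e. the host IP that the subsequent ping targets. A Go rendering of the same extraction; the sample transcript is hand-written to match busybox's layout and is an assumption, not captured output.

package main

import (
	"fmt"
	"strings"
)

// hostIP mimics `awk 'NR==5' | cut -d' ' -f3`: fifth line, third field.
// Like cut with -d' ', strings.Split treats each space as a delimiter.
func hostIP(nslookupOut string) string {
	lines := strings.Split(nslookupOut, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Hand-written stand-in for busybox nslookup output inside the pod.
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.67.1\n"
	fmt.Println(hostIP(sample)) // 192.168.67.1
}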

                                                
                                    
x
+
TestMultiNode/serial/AddNode (28.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-844915 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-844915 -v 3 --alsologtostderr: (27.518127464s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.09s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-844915 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.28s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (8.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 cp testdata/cp-test.txt multinode-844915:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 ssh -n multinode-844915 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 cp multinode-844915:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile956225546/001/cp-test_multinode-844915.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 ssh -n multinode-844915 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 cp multinode-844915:/home/docker/cp-test.txt multinode-844915-m02:/home/docker/cp-test_multinode-844915_multinode-844915-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 ssh -n multinode-844915 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 ssh -n multinode-844915-m02 "sudo cat /home/docker/cp-test_multinode-844915_multinode-844915-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 cp multinode-844915:/home/docker/cp-test.txt multinode-844915-m03:/home/docker/cp-test_multinode-844915_multinode-844915-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 ssh -n multinode-844915 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 ssh -n multinode-844915-m03 "sudo cat /home/docker/cp-test_multinode-844915_multinode-844915-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 cp testdata/cp-test.txt multinode-844915-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 ssh -n multinode-844915-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 cp multinode-844915-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile956225546/001/cp-test_multinode-844915-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 ssh -n multinode-844915-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 cp multinode-844915-m02:/home/docker/cp-test.txt multinode-844915:/home/docker/cp-test_multinode-844915-m02_multinode-844915.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 ssh -n multinode-844915-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 ssh -n multinode-844915 "sudo cat /home/docker/cp-test_multinode-844915-m02_multinode-844915.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 cp multinode-844915-m02:/home/docker/cp-test.txt multinode-844915-m03:/home/docker/cp-test_multinode-844915-m02_multinode-844915-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 ssh -n multinode-844915-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 ssh -n multinode-844915-m03 "sudo cat /home/docker/cp-test_multinode-844915-m02_multinode-844915-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 cp testdata/cp-test.txt multinode-844915-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 ssh -n multinode-844915-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 cp multinode-844915-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile956225546/001/cp-test_multinode-844915-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 ssh -n multinode-844915-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 cp multinode-844915-m03:/home/docker/cp-test.txt multinode-844915:/home/docker/cp-test_multinode-844915-m03_multinode-844915.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 ssh -n multinode-844915-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 ssh -n multinode-844915 "sudo cat /home/docker/cp-test_multinode-844915-m03_multinode-844915.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 cp multinode-844915-m03:/home/docker/cp-test.txt multinode-844915-m02:/home/docker/cp-test_multinode-844915-m03_multinode-844915-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 ssh -n multinode-844915-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 ssh -n multinode-844915-m02 "sudo cat /home/docker/cp-test_multinode-844915-m03_multinode-844915-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.71s)
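
Every CopyFile step above follows one round-trip pattern: minikube cp moves a file local-to-node, node-to-local, or node-to-node, then minikube ssh -n <node> runs sudo cat on the destination so the contents can be compared with the source. A condensed sketch of one such round trip, using the profile and paths from the log; this is a reconstruction with trimmed error handling, not the helpers' actual implementation.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	src := "testdata/cp-test.txt"
	want, err := os.ReadFile(src)
	if err != nil {
		panic(err)
	}
	// Copy the file onto the node, as in helpers_test.go:556.
	cp := exec.Command("out/minikube-linux-amd64", "-p", "multinode-844915",
		"cp", src, "multinode-844915:/home/docker/cp-test.txt")
	if out, err := cp.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("cp failed: %v\n%s", err, out))
	}
	// Read it back over ssh, as in helpers_test.go:534, and compare.
	cat := exec.Command("out/minikube-linux-amd64", "-p", "multinode-844915",
		"ssh", "-n", "multinode-844915", "sudo cat /home/docker/cp-test.txt")
	got, err := cat.Output()
	if err != nil {
		panic(err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		panic("round-tripped contents differ")
	}
	fmt.Println("cp round trip OK")
}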

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-844915 node stop m03: (1.167939935s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-844915 status: exit status 7 (437.09085ms)

                                                
                                                
-- stdout --
	multinode-844915
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-844915-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-844915-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-844915 status --alsologtostderr: exit status 7 (438.310689ms)

                                                
                                                
-- stdout --
	multinode-844915
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-844915-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-844915-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 12:26:10.055118  233018 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:26:10.055221  233018 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:26:10.055229  233018 out.go:358] Setting ErrFile to fd 2...
	I0819 12:26:10.055233  233018 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:26:10.055390  233018 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-77145/.minikube/bin
	I0819 12:26:10.055561  233018 out.go:352] Setting JSON to false
	I0819 12:26:10.055589  233018 mustload.go:65] Loading cluster: multinode-844915
	I0819 12:26:10.055627  233018 notify.go:220] Checking for updates...
	I0819 12:26:10.055944  233018 config.go:182] Loaded profile config "multinode-844915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:26:10.055958  233018 status.go:255] checking status of multinode-844915 ...
	I0819 12:26:10.056321  233018 cli_runner.go:164] Run: docker container inspect multinode-844915 --format={{.State.Status}}
	I0819 12:26:10.073318  233018 status.go:330] multinode-844915 host status = "Running" (err=<nil>)
	I0819 12:26:10.073364  233018 host.go:66] Checking if "multinode-844915" exists ...
	I0819 12:26:10.073652  233018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-844915
	I0819 12:26:10.090446  233018 host.go:66] Checking if "multinode-844915" exists ...
	I0819 12:26:10.090754  233018 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:26:10.090808  233018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-844915
	I0819 12:26:10.107660  233018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/multinode-844915/id_rsa Username:docker}
	I0819 12:26:10.194766  233018 ssh_runner.go:195] Run: systemctl --version
	I0819 12:26:10.198724  233018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:26:10.208859  233018 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 12:26:10.255393  233018 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-08-19 12:26:10.246447692 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 12:26:10.255962  233018 kubeconfig.go:125] found "multinode-844915" server: "https://192.168.67.2:8443"
	I0819 12:26:10.255989  233018 api_server.go:166] Checking apiserver status ...
	I0819 12:26:10.256022  233018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:26:10.266220  233018 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1501/cgroup
	I0819 12:26:10.274518  233018 api_server.go:182] apiserver freezer: "13:freezer:/docker/44dd451c8a50fdb422604285f4a2a8a8a1234ed8b78f0bd8c729b6e2373ccf22/crio/crio-aa589ff41241ed0b5d563060a9bf13b10563f0cc52cc9732d811509d5bd02883"
	I0819 12:26:10.274584  233018 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/44dd451c8a50fdb422604285f4a2a8a8a1234ed8b78f0bd8c729b6e2373ccf22/crio/crio-aa589ff41241ed0b5d563060a9bf13b10563f0cc52cc9732d811509d5bd02883/freezer.state
	I0819 12:26:10.282507  233018 api_server.go:204] freezer state: "THAWED"
	I0819 12:26:10.282536  233018 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0819 12:26:10.286197  233018 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0819 12:26:10.286227  233018 status.go:422] multinode-844915 apiserver status = Running (err=<nil>)
	I0819 12:26:10.286238  233018 status.go:257] multinode-844915 status: &{Name:multinode-844915 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 12:26:10.286256  233018 status.go:255] checking status of multinode-844915-m02 ...
	I0819 12:26:10.286501  233018 cli_runner.go:164] Run: docker container inspect multinode-844915-m02 --format={{.State.Status}}
	I0819 12:26:10.303699  233018 status.go:330] multinode-844915-m02 host status = "Running" (err=<nil>)
	I0819 12:26:10.303724  233018 host.go:66] Checking if "multinode-844915-m02" exists ...
	I0819 12:26:10.303986  233018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-844915-m02
	I0819 12:26:10.321340  233018 host.go:66] Checking if "multinode-844915-m02" exists ...
	I0819 12:26:10.321602  233018 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:26:10.321646  233018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-844915-m02
	I0819 12:26:10.339971  233018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19479-77145/.minikube/machines/multinode-844915-m02/id_rsa Username:docker}
	I0819 12:26:10.422600  233018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:26:10.432572  233018 status.go:257] multinode-844915-m02 status: &{Name:multinode-844915-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0819 12:26:10.432612  233018 status.go:255] checking status of multinode-844915-m03 ...
	I0819 12:26:10.432880  233018 cli_runner.go:164] Run: docker container inspect multinode-844915-m03 --format={{.State.Status}}
	I0819 12:26:10.450166  233018 status.go:330] multinode-844915-m03 host status = "Stopped" (err=<nil>)
	I0819 12:26:10.450203  233018 status.go:343] host is not running, skipping remaining checks
	I0819 12:26:10.450212  233018 status.go:257] multinode-844915-m03 status: &{Name:multinode-844915-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.04s)
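
The stderr above shows how status decides whether the apiserver is actually healthy rather than frozen: it pgreps the kube-apiserver PID, locates that PID's freezer cgroup in /proc/<pid>/cgroup, and reads freezer.state, where THAWED means not paused. A standalone sketch of that cgroup v1 lookup; it is a simplified reconstruction of the behavior visible in the log, not minikube's actual code, and it assumes a cgroup v1 freezer hierarchy is mounted.

package main

import (
	"fmt"
	"os"
	"strings"
)

// freezerState reads /proc/<pid>/cgroup, finds the freezer hierarchy, and
// returns the contents of its freezer.state file ("THAWED" or "FROZEN").
func freezerState(pid int) (string, error) {
	raw, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return "", err
	}
	for _, line := range strings.Split(string(raw), "\n") {
		// cgroup v1 lines look like "13:freezer:/docker/<id>/crio/crio-<id>".
		parts := strings.SplitN(line, ":", 3)
		if len(parts) == 3 && parts[1] == "freezer" {
			state, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
			if err != nil {
				return "", err
			}
			return strings.TrimSpace(string(state)), nil
		}
	}
	return "", fmt.Errorf("no freezer cgroup found for pid %d", pid)
}

func main() {
	state, err := freezerState(os.Getpid())
	if err != nil {
		fmt.Println("freezer lookup failed (likely cgroup v2):", err)
		return
	}
	fmt.Println("freezer state:", state)
}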

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (8.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-844915 node start m03 -v=7 --alsologtostderr: (8.07225625s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.71s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (102.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-844915
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-844915
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-844915: (24.607421064s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-844915 --wait=true -v=8 --alsologtostderr
E0819 12:26:47.138404   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-844915 --wait=true -v=8 --alsologtostderr: (1m17.734711657s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-844915
--- PASS: TestMultiNode/serial/RestartKeepsNodes (102.43s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-844915 node delete m03: (4.623627208s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.17s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.64s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-844915 stop: (23.480567802s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-844915 status: exit status 7 (81.736216ms)

                                                
                                                
-- stdout --
	multinode-844915
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-844915-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-844915 status --alsologtostderr: exit status 7 (80.943613ms)

                                                
                                                
-- stdout --
	multinode-844915
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-844915-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 12:28:30.371127  242748 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:28:30.371373  242748 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:28:30.371381  242748 out.go:358] Setting ErrFile to fd 2...
	I0819 12:28:30.371385  242748 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:28:30.371557  242748 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-77145/.minikube/bin
	I0819 12:28:30.371710  242748 out.go:352] Setting JSON to false
	I0819 12:28:30.371738  242748 mustload.go:65] Loading cluster: multinode-844915
	I0819 12:28:30.371926  242748 notify.go:220] Checking for updates...
	I0819 12:28:30.372116  242748 config.go:182] Loaded profile config "multinode-844915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:28:30.372133  242748 status.go:255] checking status of multinode-844915 ...
	I0819 12:28:30.372533  242748 cli_runner.go:164] Run: docker container inspect multinode-844915 --format={{.State.Status}}
	I0819 12:28:30.391228  242748 status.go:330] multinode-844915 host status = "Stopped" (err=<nil>)
	I0819 12:28:30.391252  242748 status.go:343] host is not running, skipping remaining checks
	I0819 12:28:30.391258  242748 status.go:257] multinode-844915 status: &{Name:multinode-844915 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 12:28:30.391286  242748 status.go:255] checking status of multinode-844915-m02 ...
	I0819 12:28:30.391521  242748 cli_runner.go:164] Run: docker container inspect multinode-844915-m02 --format={{.State.Status}}
	I0819 12:28:30.407883  242748 status.go:330] multinode-844915-m02 host status = "Stopped" (err=<nil>)
	I0819 12:28:30.407909  242748 status.go:343] host is not running, skipping remaining checks
	I0819 12:28:30.407917  242748 status.go:257] multinode-844915-m02 status: &{Name:multinode-844915-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.64s)
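
The status lines in the stderr block above (&{Name:... Host:Stopped ...}) are a Go struct rendered with %+v. A minimal sketch of a struct whose printing matches those lines; the real type lives in minikube's status code, so treat this shape as a reconstruction from the log rather than the actual source.

package main

import "fmt"

// Field names and order are read directly off the log output.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
	TimeToStop string
	DockerEnv  string
	PodManEnv  string
}

func main() {
	s := &Status{Name: "multinode-844915-m02", Host: "Stopped", Kubelet: "Stopped",
		APIServer: "Stopped", Kubeconfig: "Stopped", Worker: true}
	// %+v on a pointer prints &{Name:multinode-844915-m02 Host:Stopped ...}
	fmt.Printf("%+v\n", s)
}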

                                                
                                    
TestMultiNode/serial/RestartMultiNode (47.77s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-844915 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0819 12:28:58.945927   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/functional-791037/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-844915 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (47.225919152s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844915 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (47.77s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (26.41s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-844915
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-844915-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-844915-m02 --driver=docker  --container-runtime=crio: exit status 14 (62.901367ms)

                                                
                                                
-- stdout --
	* [multinode-844915-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19479
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19479-77145/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-77145/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-844915-m02' is duplicated with machine name 'multinode-844915-m02' in profile 'multinode-844915'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-844915-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-844915-m03 --driver=docker  --container-runtime=crio: (24.219356536s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-844915
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-844915: exit status 80 (253.287081ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-844915 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-844915-m03 already exists in multinode-844915-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-844915-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-844915-m03: (1.824617553s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.41s)
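
The exit status 14 above is a pure name check: "multinode-844915-m02" already names a machine inside the multinode-844915 profile, so it cannot also be a profile name. A minimal sketch of such a uniqueness check; validateProfileName is a hypothetical helper for illustration, not minikube's actual function.

package main

import (
	"errors"
	"fmt"
)

// validateProfileName rejects a profile name that collides with any
// existing machine or profile name.
func validateProfileName(name string, existing []string) error {
	for _, e := range existing {
		if e == name {
			return errors.New("Profile name should be unique")
		}
	}
	return nil
}

func main() {
	machines := []string{"multinode-844915", "multinode-844915-m02"}
	fmt.Println(validateProfileName("multinode-844915-m02", machines)) // collision
	fmt.Println(validateProfileName("multinode-844915-m03", machines)) // <nil>
}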

                                                
                                    
TestPreload (193.67s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-921626 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0819 12:30:22.010405   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/functional-791037/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:30:24.071835   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-921626 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (2m36.828796314s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-921626 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-921626 image pull gcr.io/k8s-minikube/busybox: (2.651598197s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-921626
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-921626: (5.681979569s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-921626 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-921626 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (26.018249879s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-921626 image list
helpers_test.go:175: Cleaning up "test-preload-921626" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-921626
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-921626: (2.275447828s)
--- PASS: TestPreload (193.67s)
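
TestPreload pulls gcr.io/k8s-minikube/busybox into a cluster started with --preload=false, restarts it, and confirms via "image list" that the image survived. A minimal sketch of that last verification step using os/exec; the binary path and profile name are copied from the log, and the check itself is illustrative rather than the test's actual assertion.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "test-preload-921626", "image", "list").Output()
	if err != nil {
		panic(err)
	}
	if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		fmt.Println("image survived the restart")
	}
}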

                                                
                                    
TestScheduledStopUnix (100.07s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-908558 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-908558 --memory=2048 --driver=docker  --container-runtime=crio: (23.758778362s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-908558 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-908558 -n scheduled-stop-908558
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-908558 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-908558 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-908558 -n scheduled-stop-908558
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-908558
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-908558 --schedule 15s
E0819 12:33:58.946219   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/functional-791037/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-908558
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-908558: exit status 7 (63.710095ms)

                                                
                                                
-- stdout --
	scheduled-stop-908558
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-908558 -n scheduled-stop-908558
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-908558 -n scheduled-stop-908558: exit status 7 (65.803895ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-908558" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-908558
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-908558: (5.066105306s)
--- PASS: TestScheduledStopUnix (100.07s)
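
The --schedule / --cancel-scheduled flow above arms a delayed stop and can later disarm it, which is why the test inspects {{.TimeToStop}}. A minimal sketch of that arm-and-cancel pattern with time.AfterFunc; the stop action is a stand-in, not minikube's implementation.

package main

import (
	"fmt"
	"time"
)

func main() {
	// "stop --schedule 15s": arm a timer that will run the stop later.
	timer := time.AfterFunc(15*time.Second, func() {
		fmt.Println("stopping cluster") // the real code would stop the node here
	})
	// "stop --cancel-scheduled": Stop reports whether it disarmed the
	// timer before it fired.
	if timer.Stop() {
		fmt.Println("scheduled stop cancelled")
	}
}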

                                                
                                    
TestInsufficientStorage (9.71s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-245314 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-245314 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.412311912s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ff55bcd1-0730-44db-a917-aae9671670bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-245314] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"83a7c59d-d9fb-40bd-9a56-7205defb4541","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19479"}}
	{"specversion":"1.0","id":"49d695fb-6747-4511-b25e-c2e17e09537e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9e1dc230-454d-4b5b-a7f1-9bf87330e518","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19479-77145/kubeconfig"}}
	{"specversion":"1.0","id":"8d838429-8340-4ffa-aad4-c4c7a599a97a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-77145/.minikube"}}
	{"specversion":"1.0","id":"5a118bf3-bd5e-42e8-9fb4-7ec5ecfc2e3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"4c37a1b3-115b-420a-bfa6-9070773cfb31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9a7014d0-9900-479f-9a21-e1a4adf0a0f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"2c1cb2b4-e6a9-461b-9c39-bcd9ccb08ddc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"2f40729d-1ebd-4133-9431-6bee986e5a13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d36d6533-b11f-4826-890c-ad66ec95ba67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"e513b864-a0c1-4bd2-8548-08e411bf6c41","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-245314\" primary control-plane node in \"insufficient-storage-245314\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"01ab178c-cf14-452a-995d-446aedc94eee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1723740748-19452 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"161ff9a5-1bae-4543-a033-03d381e33a54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f72819cd-5da0-44c4-827d-94c6deb2ddb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-245314 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-245314 --output=json --layout=cluster: exit status 7 (251.353323ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-245314","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-245314","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 12:34:49.901445  265652 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-245314" does not appear in /home/jenkins/minikube-integration/19479-77145/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-245314 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-245314 --output=json --layout=cluster: exit status 7 (249.411794ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-245314","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-245314","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 12:34:50.151678  265750 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-245314" does not appear in /home/jenkins/minikube-integration/19479-77145/kubeconfig
	E0819 12:34:50.161282  265750 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/insufficient-storage-245314/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-245314" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-245314
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-245314: (1.796009201s)
--- PASS: TestInsufficientStorage (9.71s)
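
With --output=json, every line in the stdout block above is a CloudEvents-style envelope (specversion, id, source, type, data). A minimal sketch that decodes such a stream; the struct covers only the fields visible in this log, so it is a convenience shape, not minikube's schema definition.

package main

import (
	"encoding/json"
	"fmt"
	"io"
	"strings"
)

type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	stream := `{"specversion":"1.0","id":"a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","data":{"message":"MINIKUBE_LOCATION=19479"}}
{"specversion":"1.0","id":"b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE"}}`
	dec := json.NewDecoder(strings.NewReader(stream))
	for {
		var ev cloudEvent
		if err := dec.Decode(&ev); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Println(ev.Type, ev.Data)
	}
}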

                                                
                                    
TestRunningBinaryUpgrade (98.75s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4274403033 start -p running-upgrade-663693 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4274403033 start -p running-upgrade-663693 --memory=2200 --vm-driver=docker  --container-runtime=crio: (23.615653009s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-663693 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-663693 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m8.173752465s)
helpers_test.go:175: Cleaning up "running-upgrade-663693" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-663693
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-663693: (4.364446542s)
--- PASS: TestRunningBinaryUpgrade (98.75s)

                                                
                                    
TestKubernetesUpgrade (353.69s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-226203 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-226203 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (44.781415364s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-226203
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-226203: (1.193504106s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-226203 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-226203 status --format={{.Host}}: exit status 7 (70.241663ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-226203 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-226203 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m29.60352778s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-226203 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-226203 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-226203 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (62.5911ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-226203] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19479
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19479-77145/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-77145/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-226203
	    minikube start -p kubernetes-upgrade-226203 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2262032 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-226203 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-226203 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-226203 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.417404358s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-226203" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-226203
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-226203: (2.504345578s)
--- PASS: TestKubernetesUpgrade (353.69s)
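
The downgrade attempt above is refused with K8S_DOWNGRADE_UNSUPPORTED before anything is touched. A minimal sketch of that kind of guard using golang.org/x/mod/semver: reject whenever the requested version sorts below the cluster's current one. This mirrors the behaviour shown in the log, not minikube's actual code path.

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

func main() {
	current, requested := "v1.31.0", "v1.20.0"
	// semver.Compare returns a negative value when requested < current.
	if semver.Compare(requested, current) < 0 {
		fmt.Printf("unable to safely downgrade existing Kubernetes %s cluster to %s\n",
			current, requested)
		return
	}
	fmt.Println("same version or upgrade: allowed")
}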

                                                
                                    
TestMissingContainerUpgrade (109.68s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.540907734 start -p missing-upgrade-379090 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.540907734 start -p missing-upgrade-379090 --memory=2200 --driver=docker  --container-runtime=crio: (33.08884685s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-379090
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-379090: (19.220283927s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-379090
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-379090 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-379090 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (49.722404987s)
helpers_test.go:175: Cleaning up "missing-upgrade-379090" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-379090
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-379090: (4.980208931s)
--- PASS: TestMissingContainerUpgrade (109.68s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.58s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.58s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-551127 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-551127 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (79.395579ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-551127] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19479
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19479-77145/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-77145/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
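
Exit status 14 (MK_USAGE) above is a flag-compatibility check that runs before any driver work: --kubernetes-version cannot be combined with --no-kubernetes. A minimal sketch of that validation with the standard flag package; the flag names match the CLI, the wiring around them is illustrative.

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	version := flag.String("kubernetes-version", "", "Kubernetes version to use")
	flag.Parse()
	if *noK8s && *version != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // MK_USAGE
	}
}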

                                                
                                    
TestPause/serial/Start (52.78s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-928484 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-928484 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (52.77925901s)
--- PASS: TestPause/serial/Start (52.78s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (29.02s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-551127 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-551127 --driver=docker  --container-runtime=crio: (28.687303344s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-551127 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (29.02s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (126.29s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.772359076 start -p stopped-upgrade-568994 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.772359076 start -p stopped-upgrade-568994 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m37.804786673s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.772359076 -p stopped-upgrade-568994 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.772359076 -p stopped-upgrade-568994 stop: (3.286439686s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-568994 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-568994 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.199038922s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (126.29s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (7.82s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-551127 --no-kubernetes --driver=docker  --container-runtime=crio
E0819 12:35:24.073081   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-551127 --no-kubernetes --driver=docker  --container-runtime=crio: (5.459780137s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-551127 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-551127 status -o json: exit status 2 (299.884046ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-551127","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-551127
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-551127: (2.0628773s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.82s)
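
"status -o json" above prints one JSON object for the profile, and the command exits 2 because the host is running with the kubelet stopped. A minimal sketch that unmarshals that object into a struct with exactly the keys shown in the log, handy when scripting around these exit codes.

package main

import (
	"encoding/json"
	"fmt"
)

// Keys taken verbatim from the log line above.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-551127","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st profileStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Println(st.Host, "/", st.Kubelet) // Running / Stopped
}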

                                                
                                    
TestNoKubernetes/serial/Start (7.76s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-551127 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-551127 --no-kubernetes --driver=docker  --container-runtime=crio: (7.758254467s)
--- PASS: TestNoKubernetes/serial/Start (7.76s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-551127 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-551127 "sudo systemctl is-active --quiet service kubelet": exit status 1 (240.421724ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)
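
"systemctl is-active --quiet" exits 0 for an active unit and non-zero otherwise (3 in the stderr above), which the ssh wrapper surfaces as exit status 1. A minimal sketch of reading that exit code with os/exec, assuming it runs somewhere systemctl is available; the unit name comes from the log.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet is active")
	case errors.As(err, &ee):
		fmt.Println("kubelet not active, exit code:", ee.ExitCode()) // 3 when inactive
	default:
		panic(err) // systemctl missing or not runnable
	}
}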

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.15s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.15s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.18s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-551127
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-551127: (1.178241995s)
--- PASS: TestNoKubernetes/serial/Stop (1.18s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.22s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-551127 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-551127 --driver=docker  --container-runtime=crio: (7.223646869s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.22s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (31.85s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-928484 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-928484 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.833847865s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (31.85s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-551127 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-551127 "sudo systemctl is-active --quiet service kubelet": exit status 1 (243.01007ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

                                                
                                    
TestNetworkPlugins/group/false (3.18s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-714939 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-714939 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (146.009391ms)

                                                
                                                
-- stdout --
	* [false-714939] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19479
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19479-77145/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-77145/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 12:35:53.161708  283966 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:35:53.161934  283966 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:35:53.161947  283966 out.go:358] Setting ErrFile to fd 2...
	I0819 12:35:53.161961  283966 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:35:53.162270  283966 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-77145/.minikube/bin
	I0819 12:35:53.163237  283966 out.go:352] Setting JSON to false
	I0819 12:35:53.164779  283966 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8248,"bootTime":1724062705,"procs":264,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 12:35:53.164842  283966 start.go:139] virtualization: kvm guest
	I0819 12:35:53.167013  283966 out.go:177] * [false-714939] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 12:35:53.168210  283966 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 12:35:53.168268  283966 notify.go:220] Checking for updates...
	I0819 12:35:53.170695  283966 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 12:35:53.171987  283966 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-77145/kubeconfig
	I0819 12:35:53.173194  283966 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-77145/.minikube
	I0819 12:35:53.174588  283966 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 12:35:53.175926  283966 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 12:35:53.177741  283966 config.go:182] Loaded profile config "force-systemd-env-033817": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:35:53.177977  283966 config.go:182] Loaded profile config "pause-928484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:35:53.178093  283966 config.go:182] Loaded profile config "stopped-upgrade-568994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0819 12:35:53.178215  283966 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 12:35:53.199889  283966 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 12:35:53.200049  283966 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 12:35:53.250662  283966 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:69 SystemTime:2024-08-19 12:35:53.240672181 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 12:35:53.250801  283966 docker.go:307] overlay module found
	I0819 12:35:53.252743  283966 out.go:177] * Using the docker driver based on user configuration
	I0819 12:35:53.254163  283966 start.go:297] selected driver: docker
	I0819 12:35:53.254185  283966 start.go:901] validating driver "docker" against <nil>
	I0819 12:35:53.254201  283966 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 12:35:53.256190  283966 out.go:201] 
	W0819 12:35:53.257278  283966 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0819 12:35:53.258484  283966 out.go:201] 

                                                
                                                
** /stderr **
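
The MK_USAGE error above enforces that the crio container runtime always has a CNI, so --cni=false is rejected during driver validation. A minimal sketch of that compatibility check; the rule is quoted from the log, and validateCNI is a hypothetical helper.

package main

import (
	"fmt"
	"os"
)

// validateCNI rejects runtime/CNI combinations that cannot work.
func validateCNI(runtime, cni string) error {
	if runtime == "crio" && cni == "false" {
		return fmt.Errorf("The %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	if err := validateCNI("crio", "false"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14)
	}
}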
net_test.go:88: 
----------------------- debugLogs start: false-714939 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-714939

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-714939

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-714939

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-714939

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-714939

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-714939

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-714939

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-714939

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-714939

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-714939

>>> host: /etc/nsswitch.conf:
* Profile "false-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714939"

>>> host: /etc/hosts:
* Profile "false-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714939"

>>> host: /etc/resolv.conf:
* Profile "false-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714939"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-714939

>>> host: crictl pods:
* Profile "false-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714939"

>>> host: crictl containers:
* Profile "false-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714939"

>>> k8s: describe netcat deployment:
error: context "false-714939" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-714939" does not exist

>>> k8s: netcat logs:
error: context "false-714939" does not exist

>>> k8s: describe coredns deployment:
error: context "false-714939" does not exist

>>> k8s: describe coredns pods:
error: context "false-714939" does not exist

>>> k8s: coredns logs:
error: context "false-714939" does not exist

>>> k8s: describe api server pod(s):
error: context "false-714939" does not exist

>>> k8s: api server logs:
error: context "false-714939" does not exist

>>> host: /etc/cni:
* Profile "false-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714939"

>>> host: ip a s:
* Profile "false-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714939"

>>> host: ip r s:
* Profile "false-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714939"

>>> host: iptables-save:
* Profile "false-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714939"

>>> host: iptables table nat:
* Profile "false-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714939"

>>> k8s: describe kube-proxy daemon set:
error: context "false-714939" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-714939" does not exist

>>> k8s: kube-proxy logs:
error: context "false-714939" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714939"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714939"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714939"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714939"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714939"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19479-77145/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 19 Aug 2024 12:35:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-928484
contexts:
- context:
    cluster: pause-928484
    extensions:
    - extension:
        last-update: Mon, 19 Aug 2024 12:35:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-928484
  name: pause-928484
current-context: pause-928484
kind: Config
preferences: {}
users:
- name: pause-928484
  user:
    client-certificate: /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/pause-928484/client.crt
    client-key: /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/pause-928484/client.key
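
Editor's note: the kubeconfig dumped above is the host's file, which is why only the pause-928484 context appears while every kubectl probe for false-714939 fails. For reference, the same check can be done programmatically with client-go's clientcmd loader — a minimal sketch, assuming the default ~/.kube/config location rather than this CI run's minikube-managed path:

// Minimal sketch: load a kubeconfig with client-go and list its contexts,
// e.g. to confirm that "false-714939" is absent before invoking kubectl.
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// clientcmd.RecommendedHomeFile is ~/.kube/config; adjust for CI paths.
	cfg, err := clientcmd.LoadFromFile(clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	fmt.Println("current-context:", cfg.CurrentContext)
	for name, ctx := range cfg.Contexts {
		fmt.Printf("context %q -> cluster %q (namespace %q)\n", name, ctx.Cluster, ctx.Namespace)
	}
	if _, ok := cfg.Contexts["false-714939"]; !ok {
		fmt.Println(`context "false-714939" does not exist`) // matches the kubectl errors above
	}
}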

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-714939

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714939"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714939"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714939"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714939"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714939"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714939"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714939"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714939"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714939"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714939"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714939"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714939"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714939"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714939"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714939"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714939"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714939"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714939"

                                                
                                                
----------------------- debugLogs end: false-714939 [took: 2.868393195s] --------------------------------
helpers_test.go:175: Cleaning up "false-714939" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-714939
--- PASS: TestNetworkPlugins/group/false (3.18s)
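
Editor's note: the debugLogs battery above ran against a profile that was never started (the "false" case passes in about 3 seconds without creating a cluster), so every probe predictably returns "Profile ... not found" or "context was not found". A guard along these lines could skip the collection entirely — a minimal sketch, assuming `minikube profile list -o json` emits "valid"/"invalid" arrays of objects with a "Name" field; profileExists is a hypothetical helper, not part of the suite:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type profile struct {
	Name string `json:"Name"`
}

type profileList struct {
	Valid   []profile `json:"valid"`
	Invalid []profile `json:"invalid"`
}

// profileExists reports whether a minikube profile is present, so the
// debug-log battery can be skipped for profiles that were never created.
func profileExists(name string) (bool, error) {
	out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
	if err != nil {
		return false, err
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		return false, err
	}
	for _, p := range append(pl.Valid, pl.Invalid...) {
		if p.Name == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := profileExists("false-714939")
	fmt.Println(ok, err)
}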

                                                
                                    
x
+
TestPause/serial/Pause (0.7s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-928484 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.70s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.32s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-928484 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-928484 --output=json --layout=cluster: exit status 2 (320.89448ms)

                                                
                                                
-- stdout --
	{"Name":"pause-928484","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-928484","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)
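
Editor's note: the --layout=cluster JSON above encodes state with HTTP-like status codes (200 OK, 405 Stopped, 418 Paused), which is why the command exits 2 while the test still passes. A minimal decoding sketch, with struct fields mirroring only the sample shown here:

package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
}

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []node `json:"Nodes"`
}

func main() {
	// Trimmed copy of the stdout shown above.
	raw := []byte(`{"Name":"pause-928484","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-928484","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`)
	var st clusterStatus
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s is %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
	for _, n := range st.Nodes {
		for _, c := range n.Components {
			fmt.Printf("  %s: %s\n", c.Name, c.StatusName)
		}
	}
}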

                                                
                                    
x
+
TestPause/serial/Unpause (0.66s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-928484 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.66s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.92s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-928484 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.92s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (3.95s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-928484 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-928484 --alsologtostderr -v=5: (3.953884544s)
--- PASS: TestPause/serial/DeletePaused (3.95s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (4.87s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.810062627s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-928484
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-928484: exit status 1 (16.806612ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-928484: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (4.87s)
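
Editor's note: in the check above the non-zero exit is the passing outcome — after deletion, `docker volume inspect pause-928484` should fail with "no such volume" and print an empty JSON array. A minimal sketch of that assertion (the helper name is illustrative):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// volumeDeleted treats a non-zero exit from `docker volume inspect` as
// confirmation that the named volume no longer exists.
func volumeDeleted(name string) bool {
	err := exec.Command("docker", "volume", "inspect", name).Run()
	var exitErr *exec.ExitError
	return errors.As(err, &exitErr)
}

func main() {
	fmt.Println("pause-928484 deleted:", volumeDeleted("pause-928484"))
}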

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.86s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-568994
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.86s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (126.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-329006 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-329006 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m6.201811038s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (126.20s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (59.46s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-131612 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0819 12:38:58.946547   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/functional-791037/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-131612 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (59.460939937s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (59.46s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-131612 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4fb3a673-24cc-4727-a19c-53aea9fb3056] Pending
helpers_test.go:344: "busybox" [4fb3a673-24cc-4727-a19c-53aea9fb3056] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4fb3a673-24cc-4727-a19c-53aea9fb3056] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.0036206s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-131612 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.24s)
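
Editor's note: the 8m0s wait above polls for pods matching the integration-test=busybox label until one reports Ready. The equivalent with client-go looks roughly like this — a sketch, assuming the kubeconfig's current context already points at no-preload-131612:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(8 * time.Minute) // matches the 8m0s wait above
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "integration-test=busybox"})
		if err == nil {
			for _, p := range pods.Items {
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println(p.Name, "is Ready")
						return
					}
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	panic("timed out waiting for busybox to become Ready")
}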

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.77s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-131612 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-131612 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.77s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (11.82s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-131612 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-131612 --alsologtostderr -v=3: (11.821458656s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.82s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-131612 -n no-preload-131612
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-131612 -n no-preload-131612: exit status 7 (72.527791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-131612 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)
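
Editor's note: the "(may be ok)" remark above reflects that `minikube status` exits non-zero for a stopped cluster; the test tolerates exit status 7 as long as stdout reports Stopped. A minimal sketch of that tolerance, using only the behavior visible in this log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// hostStopped runs `minikube status --format={{.Host}}` and accepts a
// non-zero exit, since a stopped host is the expected state here.
func hostStopped(profile string) (bool, error) {
	out, err := exec.Command("minikube", "status", "--format={{.Host}}", "-p", profile).Output()
	var exitErr *exec.ExitError
	if err != nil && !errors.As(err, &exitErr) {
		return false, err // the binary itself could not be run
	}
	return strings.TrimSpace(string(out)) == "Stopped", nil
}

func main() {
	stopped, err := hostStopped("no-preload-131612")
	fmt.Println(stopped, err)
}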

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (262s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-131612 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0819 12:40:24.070744   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-131612 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m21.67380479s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-131612 -n no-preload-131612
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (262.00s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-329006 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4c43d10a-0c0f-48dd-991e-d4941cbb1fb6] Pending
helpers_test.go:344: "busybox" [4c43d10a-0c0f-48dd-991e-d4941cbb1fb6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4c43d10a-0c0f-48dd-991e-d4941cbb1fb6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004082859s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-329006 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.43s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.83s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-329006 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-329006 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.83s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-329006 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-329006 --alsologtostderr -v=3: (12.370455952s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.37s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (44.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-249755 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-249755 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (44.117041946s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (44.12s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-329006 -n old-k8s-version-329006
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-329006 -n old-k8s-version-329006: exit status 7 (77.505001ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-329006 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (143.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-329006 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-329006 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m22.986296306s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-329006 -n old-k8s-version-329006
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (143.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-249755 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2c04c8b9-386e-4a84-bd11-761f922d54af] Pending
helpers_test.go:344: "busybox" [2c04c8b9-386e-4a84-bd11-761f922d54af] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2c04c8b9-386e-4a84-bd11-761f922d54af] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003821914s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-249755 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-249755 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-249755 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.025961238s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-249755 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-249755 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-249755 --alsologtostderr -v=3: (12.082078914s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.08s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-249755 -n embed-certs-249755
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-249755 -n embed-certs-249755: exit status 7 (65.037775ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-249755 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (276s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-249755 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-249755 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m35.688433961s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-249755 -n embed-certs-249755
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (276.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (43.5s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-560871 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-560871 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (43.497139087s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (43.50s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-560871 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e01c6614-4a88-468e-a3de-b86eb8b3a1d9] Pending
helpers_test.go:344: "busybox" [e01c6614-4a88-468e-a3de-b86eb8b3a1d9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e01c6614-4a88-468e-a3de-b86eb8b3a1d9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.005431376s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-560871 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-mghx4" [1eb9d0b2-cbca-4727-ab55-3e61b3187160] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003401062s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.79s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-560871 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-560871 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.79s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.82s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-560871 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-560871 --alsologtostderr -v=3: (11.818449875s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.82s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-mghx4" [1eb9d0b2-cbca-4727-ab55-3e61b3187160] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003901439s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-329006 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-329006 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)
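
Editor's note: VerifyKubernetesImages compares the `image list --format=json` output against the images minikube is expected to ship for the requested Kubernetes version; anything else (here the kindnetd and busybox images) is reported as non-minikube. A minimal sketch of that classification, with a hypothetical allow-list standing in for the version-derived expected set:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// The first three names are from the output above; the last is a
	// representative expected image for v1.20.0, added for contrast.
	images := []string{
		"kindest/kindnetd:v20240813-c6f155d6",
		"kindest/kindnetd:v20210326-1e038dc5",
		"gcr.io/k8s-minikube/busybox:1.28.4-glibc",
		"registry.k8s.io/kube-apiserver:v1.20.0",
	}
	// Hypothetical allow-list; the real test derives this from --kubernetes-version.
	expected := map[string]bool{
		"registry.k8s.io/kube-apiserver":          true,
		"registry.k8s.io/kube-controller-manager": true,
		"registry.k8s.io/kube-scheduler":          true,
		"registry.k8s.io/kube-proxy":              true,
	}
	for _, img := range images {
		name := img
		if i := strings.LastIndex(img, ":"); i >= 0 {
			name = img[:i] // strip the tag before lookup
		}
		if !expected[name] {
			fmt.Println("Found non-minikube image:", img)
		}
	}
}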

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-329006 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-329006 -n old-k8s-version-329006
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-329006 -n old-k8s-version-329006: exit status 2 (270.579667ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-329006 -n old-k8s-version-329006
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-329006 -n old-k8s-version-329006: exit status 2 (272.282158ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-329006 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-329006 -n old-k8s-version-329006
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-329006 -n old-k8s-version-329006
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.47s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-560871 -n default-k8s-diff-port-560871
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-560871 -n default-k8s-diff-port-560871: exit status 7 (70.943774ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-560871 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (276.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-560871 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-560871 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m35.821280349s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-560871 -n default-k8s-diff-port-560871
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (276.12s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (26.02s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-073796 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-073796 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (26.016940506s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (26.02s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.83s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-073796 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.83s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.86s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-073796 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-073796 --alsologtostderr -v=3: (1.858258417s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.86s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-073796 -n newest-cni-073796
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-073796 -n newest-cni-073796: exit status 7 (67.071173ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-073796 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (12.93s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-073796 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0819 12:43:58.946030   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/functional-791037/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-073796 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (12.619537735s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-073796 -n newest-cni-073796
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.93s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-htjnm" [dced2646-4500-42b5-b6f1-f0255656c489] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003962187s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-073796 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.89s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-073796 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-073796 -n newest-cni-073796
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-073796 -n newest-cni-073796: exit status 2 (282.245792ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-073796 -n newest-cni-073796
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-073796 -n newest-cni-073796: exit status 2 (297.895079ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-073796 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-073796 -n newest-cni-073796
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-073796 -n newest-cni-073796
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.89s)
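All of the Pause subtests in this report drive the same five-step sequence. A sketch of reproducing it by hand against this profile, writing `minikube` for the harness's out/minikube-linux-amd64 binary:

minikube pause -p newest-cni-073796
# While paused, `status` exits 2 (expected): the apiserver reports Paused,
# the kubelet reports Stopped.
minikube status --format='{{.APIServer}}' -p newest-cni-073796   # -> Paused
minikube status --format='{{.Kubelet}}' -p newest-cni-073796     # -> Stopped
minikube unpause -p newest-cni-073796
# After unpause both status queries succeed again (exit 0).
minikube status --format='{{.APIServer}}' -p newest-cni-073796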

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-htjnm" [dced2646-4500-42b5-b6f1-f0255656c489] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003793232s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-131612 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (48.56s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-714939 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-714939 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (48.559162481s)
--- PASS: TestNetworkPlugins/group/auto/Start (48.56s)
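Every TestNetworkPlugins/*/Start in this report is this same invocation with a different plugin selector: --cni=kindnet, --cni=calico, --cni=flannel, --cni=bridge, a manifest path, or --enable-default-cni=true; only the profile name changes. The "auto" case simply omits --cni:

# Shape of the start commands in this report (auto = no explicit --cni).
out/minikube-linux-amd64 start -p auto-714939 --memory=3072 --alsologtostderr \
  --wait=true --wait-timeout=15m --driver=docker --container-runtime=crio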

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-131612 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)
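VerifyKubernetesImages is a scan of the profile's image list; the "Found non-minikube image" lines are informational, not failures (busybox and kindnetd are leftovers from earlier subtests in this profile). The underlying command:

# Emits a JSON array of image names; the test compares it against the
# expected image set for the Kubernetes version under test.
out/minikube-linux-amd64 -p no-preload-131612 image list --format=json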

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.25s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-131612 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-131612 --alsologtostderr -v=1: (1.049878449s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-131612 -n no-preload-131612
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-131612 -n no-preload-131612: exit status 2 (279.390189ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-131612 -n no-preload-131612
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-131612 -n no-preload-131612: exit status 2 (283.883838ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-131612 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-131612 -n no-preload-131612
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-131612 -n no-preload-131612
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (45.78s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-714939 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-714939 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (45.779720651s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (45.78s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-714939 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)
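KubeletFlags dumps the kubelet command line from inside the node; the assertions on the flags it contains live in net_test.go. By hand:

# pgrep -a prints the full argv, so runtime wiring (e.g. the crio socket
# passed via --container-runtime-endpoint) is visible for inspection.
out/minikube-linux-amd64 ssh -p auto-714939 "pgrep -a kubelet"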

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (8.18s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-714939 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nmb8d" [f5f5dd50-d120-4c60-81dc-3dfa53d8347a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nmb8d" [f5f5dd50-d120-4c60-81dc-3dfa53d8347a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.00391942s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.18s)
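NetCatPod deploys the shared netcat fixture and waits for it to go Ready; the `kubectl wait` below is a stand-in for the harness poller (helpers_test.go:344), shown here as a hand-run equivalent:

kubectl --context auto-714939 replace --force -f testdata/netcat-deployment.yaml
# The test allows up to 15m; in this run the pod was Ready in ~8s.
kubectl --context auto-714939 wait --for=condition=ready pod -l app=netcat --timeout=15m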

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-rvmvp" [c0c383e2-88b1-4ffe-ba96-65e4a260f754] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00407585s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
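ControllerPod only runs for plugins that ship a node agent (kindnet, calico, flannel); it is the same label poll aimed at the plugin's own pods. A hand-run equivalent, with `kubectl wait` again standing in for the poller:

# kindnet's agent runs in kube-system with label app=kindnet.
kubectl --context kindnet-714939 -n kube-system wait --for=condition=ready pod -l app=kindnet --timeout=10m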

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-714939 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-714939 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-714939 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
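The DNS/Localhost/HairPin trio exercises three distinct paths from inside the netcat pod: service-name resolution, loopback, and hairpin NAT (the pod reaching itself back through its own Service). The commands are exactly what the log shows, annotated:

# DNS: resolve the cluster's built-in service name.
kubectl --context auto-714939 exec deployment/netcat -- nslookup kubernetes.default
# Localhost: the pod's own port over 127.0.0.1.
kubectl --context auto-714939 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# HairPin: the same port, but via the "netcat" Service name, so traffic
# leaves the pod and must be NATed back to it.
kubectl --context auto-714939 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"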

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-714939 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (8.17s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-714939 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-j9nzl" [54d93b15-3f3a-474a-bc3a-1da1b5328903] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-j9nzl" [54d93b15-3f3a-474a-bc3a-1da1b5328903] Running
E0819 12:45:24.071116   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/addons-010148/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:45:26.385446   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/old-k8s-version-329006/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:45:26.391845   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/old-k8s-version-329006/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:45:26.403286   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/old-k8s-version-329006/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:45:26.424712   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/old-k8s-version-329006/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:45:26.466212   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/old-k8s-version-329006/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:45:26.547714   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/old-k8s-version-329006/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:45:26.709799   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/old-k8s-version-329006/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.003353984s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.17s)
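The E0819 cert_rotation lines interleaved above are client-go noise rather than test output: the long-running test binary still watches client certificates for profiles deleted earlier in the run (addons-010148, old-k8s-version-329006), so each reload fails with "no such file or directory". They recur through the rest of the report and can be filtered when reading a saved copy (report.txt is a hypothetical filename):

grep -v 'cert_rotation.go' report.txt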

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-714939 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-714939 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-714939 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0819 12:45:27.032080   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/old-k8s-version-329006/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (59.19s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-714939 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0819 12:45:36.639895   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/old-k8s-version-329006/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-714939 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (59.192485978s)
--- PASS: TestNetworkPlugins/group/calico/Start (59.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (53.39s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-714939 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0819 12:45:46.881583   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/old-k8s-version-329006/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:46:07.363399   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/old-k8s-version-329006/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-714939 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (53.390875553s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (53.39s)
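Note that --cni here takes a manifest path rather than a built-in plugin name; minikube applies the given YAML as the cluster's CNI. From this run:

out/minikube-linux-amd64 start -p custom-flannel-714939 --memory=3072 \
  --alsologtostderr --wait=true --wait-timeout=15m \
  --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio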

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-c22gp" [89f2495a-76c9-46fd-b718-a7c9c65bb6a8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00336526s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-84lsf" [258b2aff-f621-424e-a5e2-5d2154c3d40f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004065698s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-c22gp" [89f2495a-76c9-46fd-b718-a7c9c65bb6a8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004805175s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-249755 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-714939 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-714939 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hbh6d" [41223371-4388-47a6-90a4-d20309d19fd6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hbh6d" [41223371-4388-47a6-90a4-d20309d19fd6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004321333s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-714939 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-249755 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-714939 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vvhgv" [e33124d4-23a7-4782-8172-7d6da8eb88e3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vvhgv" [e33124d4-23a7-4782-8172-7d6da8eb88e3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003991926s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.98s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-249755 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-249755 -n embed-certs-249755
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-249755 -n embed-certs-249755: exit status 2 (373.859057ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-249755 -n embed-certs-249755
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-249755 -n embed-certs-249755: exit status 2 (320.042857ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-249755 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-249755 -n embed-certs-249755
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-249755 -n embed-certs-249755
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.98s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (34.86s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-714939 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-714939 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (34.859864479s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (34.86s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-714939 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-714939 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-714939 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-714939 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-714939 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-714939 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (50.54s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-714939 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-714939 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (50.543446175s)
--- PASS: TestNetworkPlugins/group/flannel/Start (50.54s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (67.78s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-714939 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-714939 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m7.782252105s)
--- PASS: TestNetworkPlugins/group/bridge/Start (67.78s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-714939 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-714939 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zsq7x" [589295cf-0c9c-4158-a4ad-826de33288cd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-zsq7x" [589295cf-0c9c-4158-a4ad-826de33288cd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003182807s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-714939 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-714939 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-714939 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-nrf8p" [07c50608-ef59-473e-90fe-05f5de70da73] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004498483s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-714939 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-s4v58" [74a191f6-5d6a-4642-a1a6-eade5ad88efd] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003834545s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (8.17s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-714939 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rgm7b" [ea59cc0a-8f51-4744-85d6-e508aeb2a6f3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-rgm7b" [ea59cc0a-8f51-4744-85d6-e508aeb2a6f3] Running
E0819 12:48:10.247413   83914 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/old-k8s-version-329006/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004398669s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.17s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-s4v58" [74a191f6-5d6a-4642-a1a6-eade5ad88efd] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003692024s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-560871 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-714939 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-714939 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-714939 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-560871 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.56s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-560871 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-560871 -n default-k8s-diff-port-560871
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-560871 -n default-k8s-diff-port-560871: exit status 2 (277.525447ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-560871 -n default-k8s-diff-port-560871
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-560871 -n default-k8s-diff-port-560871: exit status 2 (276.178126ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-560871 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-560871 -n default-k8s-diff-port-560871
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-560871 -n default-k8s-diff-port-560871
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.56s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-714939 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.25s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-714939 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tjhnm" [38f385ca-2faf-49dd-870c-d7b3288f8d2e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-tjhnm" [38f385ca-2faf-49dd-870c-d7b3288f8d2e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004135424s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-714939 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-714939 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-714939 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

                                                
                                    

Test skip (25/328)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.14s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-023021" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-023021
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)
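
Even a skipped group cleans up the profile it reserved, which is why this 0.14s entry still runs a delete. A sketch of that cleanup step using os/exec; the binary path matches the log, while the helper name is invented.

package helpers

import (
	"os/exec"
	"testing"
)

// cleanupProfile deletes a minikube profile created (or merely reserved)
// by a test, mirroring the "delete -p <profile>" call in the log above.
func cleanupProfile(t *testing.T, profile string) {
	t.Helper()
	out, err := exec.Command("out/minikube-linux-amd64", "delete", "-p", profile).CombinedOutput()
	if err != nil {
		// Cleanup failures are logged, not fatal: the profile may not exist.
		t.Logf("failed to delete profile %s: %v\n%s", profile, err, out)
	}
}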

TestNetworkPlugins/group/kubenet (3.11s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-714939 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-714939

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-714939

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-714939

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-714939

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-714939

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-714939

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-714939

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-714939

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-714939

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-714939

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714939"

>>> host: /etc/hosts:
* Profile "kubenet-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714939"

>>> host: /etc/resolv.conf:
* Profile "kubenet-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714939"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-714939

>>> host: crictl pods:
* Profile "kubenet-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714939"

>>> host: crictl containers:
* Profile "kubenet-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714939"

>>> k8s: describe netcat deployment:
error: context "kubenet-714939" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-714939" does not exist

>>> k8s: netcat logs:
error: context "kubenet-714939" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-714939" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-714939" does not exist

>>> k8s: coredns logs:
error: context "kubenet-714939" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-714939" does not exist

>>> k8s: api server logs:
error: context "kubenet-714939" does not exist

>>> host: /etc/cni:
* Profile "kubenet-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714939"

>>> host: ip a s:
* Profile "kubenet-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714939"

>>> host: ip r s:
* Profile "kubenet-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714939"

>>> host: iptables-save:
* Profile "kubenet-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714939"

>>> host: iptables table nat:
* Profile "kubenet-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714939"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-714939" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-714939" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-714939" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714939"

>>> host: kubelet daemon config:
* Profile "kubenet-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714939"

>>> k8s: kubelet logs:
* Profile "kubenet-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714939"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714939"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714939"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19479-77145/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 19 Aug 2024 12:35:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-928484
contexts:
- context:
    cluster: pause-928484
    extensions:
    - extension:
        last-update: Mon, 19 Aug 2024 12:35:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-928484
  name: pause-928484
current-context: pause-928484
kind: Config
preferences: {}
users:
- name: pause-928484
  user:
    client-certificate: /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/pause-928484/client.crt
    client-key: /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/pause-928484/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-714939

>>> host: docker daemon status:
* Profile "kubenet-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714939"

>>> host: docker daemon config:
* Profile "kubenet-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714939"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714939"

>>> host: docker system info:
* Profile "kubenet-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714939"

>>> host: cri-docker daemon status:
* Profile "kubenet-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714939"

>>> host: cri-docker daemon config:
* Profile "kubenet-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714939"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714939"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714939"

>>> host: cri-dockerd version:
* Profile "kubenet-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714939"

>>> host: containerd daemon status:
* Profile "kubenet-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714939"

>>> host: containerd daemon config:
* Profile "kubenet-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714939"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714939"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714939"

>>> host: containerd config dump:
* Profile "kubenet-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714939"

>>> host: crio daemon status:
* Profile "kubenet-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714939"

>>> host: crio daemon config:
* Profile "kubenet-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714939"

>>> host: /etc/crio:
* Profile "kubenet-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714939"

>>> host: crio config:
* Profile "kubenet-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714939"

----------------------- debugLogs end: kubenet-714939 [took: 2.967204939s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-714939" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-714939
--- SKIP: TestNetworkPlugins/group/kubenet (3.11s)
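
Every "context was not found" and "Profile ... not found" line above is expected: the test skipped before "minikube start", so the kubenet-714939 profile and kubectl context were never created, and the kubectl config that gets dumped belongs to the unrelated pause-928484 profile still present in the kubeconfig. A rough sketch of how such a debug dump can be collected; the probe list here is illustrative, not the suite's actual one.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "kubenet-714939"
	// Each probe runs even though the cluster may never have started;
	// errors are part of the diagnostic output, not a reason to stop.
	probes := [][]string{
		{"kubectl", "--context", profile, "get", "nodes"},
		{"out/minikube-linux-amd64", "-p", profile, "ssh", "cat /etc/resolv.conf"},
		{"kubectl", "config", "view"},
	}
	for _, p := range probes {
		out, err := exec.Command(p[0], p[1:]...).CombinedOutput()
		fmt.Printf(">>> %v:\n%s\n", p, out)
		if err != nil {
			fmt.Printf("(probe failed: %v)\n", err)
		}
	}
}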

TestNetworkPlugins/group/cilium (3.22s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-714939 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-714939

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-714939

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-714939

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-714939

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-714939

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-714939

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-714939

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-714939

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-714939

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-714939

>>> host: /etc/nsswitch.conf:
* Profile "cilium-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714939"

>>> host: /etc/hosts:
* Profile "cilium-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714939"

>>> host: /etc/resolv.conf:
* Profile "cilium-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714939"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-714939

>>> host: crictl pods:
* Profile "cilium-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714939"

>>> host: crictl containers:
* Profile "cilium-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714939"

>>> k8s: describe netcat deployment:
error: context "cilium-714939" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-714939" does not exist

>>> k8s: netcat logs:
error: context "cilium-714939" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-714939" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-714939" does not exist

>>> k8s: coredns logs:
error: context "cilium-714939" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-714939" does not exist

>>> k8s: api server logs:
error: context "cilium-714939" does not exist

>>> host: /etc/cni:
* Profile "cilium-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714939"

>>> host: ip a s:
* Profile "cilium-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714939"

>>> host: ip r s:
* Profile "cilium-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714939"

>>> host: iptables-save:
* Profile "cilium-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714939"

>>> host: iptables table nat:
* Profile "cilium-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714939"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-714939

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-714939

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-714939" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-714939" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-714939

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-714939

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-714939" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-714939" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-714939" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-714939" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-714939" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714939"

>>> host: kubelet daemon config:
* Profile "cilium-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714939"

>>> k8s: kubelet logs:
* Profile "cilium-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714939"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714939"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714939"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19479-77145/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 19 Aug 2024 12:35:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-928484
contexts:
- context:
    cluster: pause-928484
    extensions:
    - extension:
        last-update: Mon, 19 Aug 2024 12:35:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-928484
  name: pause-928484
current-context: pause-928484
kind: Config
preferences: {}
users:
- name: pause-928484
  user:
    client-certificate: /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/pause-928484/client.crt
    client-key: /home/jenkins/minikube-integration/19479-77145/.minikube/profiles/pause-928484/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-714939

>>> host: docker daemon status:
* Profile "cilium-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714939"

>>> host: docker daemon config:
* Profile "cilium-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714939"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714939"

>>> host: docker system info:
* Profile "cilium-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714939"

>>> host: cri-docker daemon status:
* Profile "cilium-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714939"

>>> host: cri-docker daemon config:
* Profile "cilium-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714939"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714939"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714939"

>>> host: cri-dockerd version:
* Profile "cilium-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714939"

>>> host: containerd daemon status:
* Profile "cilium-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714939"

>>> host: containerd daemon config:
* Profile "cilium-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714939"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714939"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714939"

>>> host: containerd config dump:
* Profile "cilium-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714939"

>>> host: crio daemon status:
* Profile "cilium-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714939"

>>> host: crio daemon config:
* Profile "cilium-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714939"

>>> host: /etc/crio:
* Profile "cilium-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714939"

>>> host: crio config:
* Profile "cilium-714939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714939"

----------------------- debugLogs end: cilium-714939 [took: 3.074198356s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-714939" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-714939
--- SKIP: TestNetworkPlugins/group/cilium (3.22s)
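
The debugLogs dump runs even though the test skipped because t.Skip stops the test via runtime.Goexit, and deferred calls still execute during Goexit; that matches the panic.go frame in the trace above. A minimal illustration of the mechanism, with dumpDebugLogs a hypothetical stand-in for the suite's collector:

package net

import "testing"

func dumpDebugLogs(t *testing.T, profile string) {
	t.Logf("----------------------- debugLogs start: %s --------------------------------", profile)
	// ... run the diagnostic probes here ...
}

func TestNetworkPluginsCilium(t *testing.T) {
	// Registered before the skip, so it still runs after t.Skip fires.
	defer dumpDebugLogs(t, "cilium-714939")
	t.Skip("Skipping the test as it's interfering with other tests and is outdated")
}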
